Jul 10 00:23:22.003594 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:23:22.003623 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:23:22.003634 kernel: BIOS-provided physical RAM map:
Jul 10 00:23:22.003641 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:23:22.003648 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 10 00:23:22.003655 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 10 00:23:22.003663 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 10 00:23:22.003672 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 10 00:23:22.003679 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 10 00:23:22.003686 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 10 00:23:22.003694 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 10 00:23:22.003700 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 10 00:23:22.003707 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 10 00:23:22.003715 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 10 00:23:22.003725 kernel: NX (Execute Disable) protection: active
Jul 10 00:23:22.003733 kernel: APIC: Static calls initialized
Jul 10 00:23:22.003740 kernel: efi: EFI v2.7 by Microsoft
Jul 10 00:23:22.003748 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Jul 10 00:23:22.003756 kernel: random: crng init done
Jul 10 00:23:22.003763 kernel: secureboot: Secure boot disabled
Jul 10 00:23:22.003771 kernel: SMBIOS 3.1.0 present.
Jul 10 00:23:22.003779 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 10 00:23:22.003786 kernel: DMI: Memory slots populated: 2/2
Jul 10 00:23:22.003795 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 10 00:23:22.003803 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 10 00:23:22.003810 kernel: Hyper-V: Nested features: 0x3e0101
Jul 10 00:23:22.003817 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 10 00:23:22.003824 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 10 00:23:22.003832 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:23:22.003840 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:23:22.003847 kernel: tsc: Detected 2299.997 MHz processor
Jul 10 00:23:22.003855 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:23:22.003887 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:23:22.003905 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 10 00:23:22.003913 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:23:22.003920 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:23:22.003927 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 10 00:23:22.003934 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 10 00:23:22.003942 kernel: Using GB pages for direct mapping
Jul 10 00:23:22.003949 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:23:22.003960 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 10 00:23:22.003970 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.003978 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.003985 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 10 00:23:22.003993 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 10 00:23:22.004001 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.004010 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.004019 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.004028 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:23:22.004036 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:23:22.004044 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:22.004052 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 10 00:23:22.004060 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 10 00:23:22.004068 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 10 00:23:22.004076 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 10 00:23:22.004085 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 10 00:23:22.004094 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 10 00:23:22.004102 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 10 00:23:22.004110 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 10 00:23:22.004118 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 10 00:23:22.004126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 10 00:23:22.004134 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 10 00:23:22.004143 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 10 00:23:22.004151 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 10 00:23:22.004159 kernel: Zone ranges:
Jul 10 00:23:22.006627 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:23:22.006650 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 00:23:22.006660 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:23:22.006668 kernel: Device empty
Jul 10 00:23:22.006677 kernel: Movable zone start for each node
Jul 10 00:23:22.006686 kernel: Early memory node ranges
Jul 10 00:23:22.006695 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:23:22.006704 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 10 00:23:22.006713 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 10 00:23:22.006724 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 10 00:23:22.006733 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:23:22.006741 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 10 00:23:22.006750 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:23:22.006759 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:23:22.006767 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 10 00:23:22.006776 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 10 00:23:22.006784 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 10 00:23:22.006793 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:23:22.006803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:23:22.006811 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:23:22.006820 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 10 00:23:22.006828 kernel: TSC deadline timer available
Jul 10 00:23:22.006836 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:23:22.006844 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:23:22.006852 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:23:22.006861 kernel: CPU topo: Max. threads per core: 2
Jul 10 00:23:22.007041 kernel: CPU topo: Num. cores per package: 1
Jul 10 00:23:22.007113 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:23:22.007123 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:23:22.007132 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 10 00:23:22.007140 kernel: Booting paravirtualized kernel on Hyper-V
Jul 10 00:23:22.007150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:23:22.007159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:23:22.007167 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:23:22.007176 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:23:22.007186 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:23:22.007196 kernel: Hyper-V: PV spinlocks enabled
Jul 10 00:23:22.007205 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:23:22.007216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:23:22.007226 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:23:22.007235 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 10 00:23:22.007244 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:23:22.007252 kernel: Fallback order for Node 0: 0
Jul 10 00:23:22.007261 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 10 00:23:22.007272 kernel: Policy zone: Normal
Jul 10 00:23:22.007281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:23:22.007290 kernel: software IO TLB: area num 2.
Jul 10 00:23:22.007299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:23:22.007307 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:23:22.007316 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:23:22.007325 kernel: Dynamic Preempt: voluntary
Jul 10 00:23:22.007333 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:23:22.007344 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:23:22.007362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:23:22.007372 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:23:22.007381 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:23:22.007392 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:23:22.007402 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:23:22.007411 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:23:22.007421 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:22.007431 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:22.007439 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:22.007449 kernel: Using NULL legacy PIC
Jul 10 00:23:22.007460 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 10 00:23:22.007469 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:23:22.007479 kernel: Console: colour dummy device 80x25
Jul 10 00:23:22.007489 kernel: printk: legacy console [tty1] enabled
Jul 10 00:23:22.007498 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:23:22.007507 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 10 00:23:22.007516 kernel: ACPI: Core revision 20240827
Jul 10 00:23:22.007527 kernel: Failed to register legacy timer interrupt
Jul 10 00:23:22.007537 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:23:22.007546 kernel: x2apic enabled
Jul 10 00:23:22.007555 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:23:22.007564 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 10 00:23:22.007573 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 10 00:23:22.007583 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 10 00:23:22.007593 kernel: Hyper-V: Using IPI hypercalls
Jul 10 00:23:22.007602 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 10 00:23:22.007613 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 10 00:23:22.007622 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 10 00:23:22.007631 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 10 00:23:22.007640 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 10 00:23:22.007649 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 10 00:23:22.007658 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21273273464, max_idle_ns: 440795276752 ns
Jul 10 00:23:22.007667 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299997)
Jul 10 00:23:22.007677 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:23:22.007688 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 00:23:22.007697 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 00:23:22.007707 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:23:22.007716 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:23:22.007725 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:23:22.007734 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 10 00:23:22.007743 kernel: RETBleed: Vulnerable
Jul 10 00:23:22.007752 kernel: Speculative Store Bypass: Vulnerable
Jul 10 00:23:22.007761 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 10 00:23:22.007770 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:23:22.007779 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:23:22.007790 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:23:22.007799 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 10 00:23:22.007808 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 10 00:23:22.007817 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 10 00:23:22.007825 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 10 00:23:22.007834 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 10 00:23:22.007843 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 10 00:23:22.007851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:23:22.007860 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 10 00:23:22.007895 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 10 00:23:22.007904 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 10 00:23:22.007915 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 10 00:23:22.007924 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 10 00:23:22.007933 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 10 00:23:22.007942 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 10 00:23:22.007951 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:23:22.007960 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:23:22.007968 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:23:22.007977 kernel: landlock: Up and running.
Jul 10 00:23:22.007986 kernel: SELinux: Initializing.
Jul 10 00:23:22.007994 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:23:22.008003 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:23:22.008012 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 10 00:23:22.008023 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 10 00:23:22.008032 kernel: signal: max sigframe size: 11952
Jul 10 00:23:22.008041 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:23:22.008050 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:23:22.008059 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:23:22.008068 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 10 00:23:22.008077 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:23:22.008085 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:23:22.008094 kernel: .... node #0, CPUs: #1
Jul 10 00:23:22.008106 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:23:22.008115 kernel: smpboot: Total of 2 processors activated (9199.98 BogoMIPS)
Jul 10 00:23:22.008125 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 299988K reserved, 0K cma-reserved)
Jul 10 00:23:22.008134 kernel: devtmpfs: initialized
Jul 10 00:23:22.008143 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:23:22.008152 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 10 00:23:22.008162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:23:22.008171 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:23:22.008180 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:23:22.008191 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:23:22.008200 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:23:22.008209 kernel: audit: type=2000 audit(1752106998.029:1): state=initialized audit_enabled=0 res=1
Jul 10 00:23:22.008218 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:23:22.008227 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:23:22.008235 kernel: cpuidle: using governor menu
Jul 10 00:23:22.008244 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:23:22.008254 kernel: dca service started, version 1.12.1
Jul 10 00:23:22.008263 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 10 00:23:22.008274 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 10 00:23:22.008283 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:23:22.008293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:23:22.008303 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:23:22.008312 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:23:22.008322 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:23:22.008331 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:23:22.008340 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:23:22.008352 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:23:22.008361 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:23:22.008370 kernel: ACPI: Interpreter enabled
Jul 10 00:23:22.008380 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:23:22.008389 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:23:22.008398 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:23:22.008408 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 10 00:23:22.008417 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 10 00:23:22.008427 kernel: iommu: Default domain type: Translated
Jul 10 00:23:22.008436 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:23:22.008447 kernel: efivars: Registered efivars operations
Jul 10 00:23:22.008456 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:23:22.008465 kernel: PCI: System does not support PCI
Jul 10 00:23:22.008475 kernel: vgaarb: loaded
Jul 10 00:23:22.008484 kernel: clocksource: Switched to clocksource tsc-early
Jul 10 00:23:22.008494 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:23:22.008503 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:23:22.008513 kernel: pnp: PnP ACPI init
Jul 10 00:23:22.008522 kernel: pnp: PnP ACPI: found 3 devices
Jul 10 00:23:22.008533 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:23:22.008543 kernel: NET: Registered PF_INET protocol family
Jul 10 00:23:22.008552 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:23:22.008562 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 10 00:23:22.008571 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:23:22.008581 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:23:22.008591 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 10 00:23:22.008600 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 10 00:23:22.008612 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:23:22.008621 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:23:22.008631 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:23:22.008641 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:23:22.008650 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:23:22.008659 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 00:23:22.008669 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB)
Jul 10 00:23:22.008679 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 10 00:23:22.008688 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 10 00:23:22.008700 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21273273464, max_idle_ns: 440795276752 ns
Jul 10 00:23:22.008709 kernel: clocksource: Switched to clocksource tsc
Jul 10 00:23:22.008719 kernel: Initialise system trusted keyrings
Jul 10 00:23:22.008728 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 10 00:23:22.008738 kernel: Key type asymmetric registered
Jul 10 00:23:22.008748 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:23:22.008757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:23:22.008766 kernel: io scheduler mq-deadline registered
Jul 10 00:23:22.008775 kernel: io scheduler kyber registered
Jul 10 00:23:22.008785 kernel: io scheduler bfq registered
Jul 10 00:23:22.008794 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:23:22.008803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:23:22.008811 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:23:22.008820 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 10 00:23:22.008829 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:23:22.008838 kernel: i8042: PNP: No PS/2 controller found.
Jul 10 00:23:22.009420 kernel: rtc_cmos 00:02: registered as rtc0
Jul 10 00:23:22.009514 kernel: rtc_cmos 00:02: setting system clock to 2025-07-10T00:23:21 UTC (1752107001)
Jul 10 00:23:22.009590 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 10 00:23:22.009601 kernel: intel_pstate: Intel P-state driver initializing
Jul 10 00:23:22.009611 kernel: efifb: probing for efifb
Jul 10 00:23:22.009620 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 10 00:23:22.009630 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 10 00:23:22.009639 kernel: efifb: scrolling: redraw
Jul 10 00:23:22.009648 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:23:22.009658 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 00:23:22.009669 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:23:22.009678 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:23:22.009688 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:23:22.009697 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:23:22.009707 kernel: Segment Routing with IPv6
Jul 10 00:23:22.009716 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:23:22.009725 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:23:22.009734 kernel: Key type dns_resolver registered
Jul 10 00:23:22.009744 kernel: IPI shorthand broadcast: enabled
Jul 10 00:23:22.009755 kernel: sched_clock: Marking stable (3147003775, 100411733)->(3571998231, -324582723)
Jul 10 00:23:22.009764 kernel: registered taskstats version 1
Jul 10 00:23:22.009774 kernel: Loading compiled-in X.509 certificates
Jul 10 00:23:22.009783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:23:22.009793 kernel: Demotion targets for Node 0: null
Jul 10 00:23:22.009802 kernel: Key type .fscrypt registered
Jul 10 00:23:22.009811 kernel: Key type fscrypt-provisioning registered
Jul 10 00:23:22.009820 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:23:22.009829 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:23:22.009841 kernel: ima: No architecture policies found
Jul 10 00:23:22.009851 kernel: clk: Disabling unused clocks
Jul 10 00:23:22.009860 kernel: Warning: unable to open an initial console.
Jul 10 00:23:22.012540 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:23:22.012552 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:23:22.012561 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:23:22.012570 kernel: Run /init as init process
Jul 10 00:23:22.012579 kernel: with arguments:
Jul 10 00:23:22.012588 kernel: /init
Jul 10 00:23:22.012599 kernel: with environment:
Jul 10 00:23:22.012607 kernel: HOME=/
Jul 10 00:23:22.012617 kernel: TERM=linux
Jul 10 00:23:22.012625 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:23:22.012635 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:23:22.012649 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:23:22.012660 systemd[1]: Detected virtualization microsoft.
Jul 10 00:23:22.012671 systemd[1]: Detected architecture x86-64.
Jul 10 00:23:22.012680 systemd[1]: Running in initrd.
Jul 10 00:23:22.012689 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:23:22.012699 systemd[1]: Hostname set to .
Jul 10 00:23:22.012708 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:23:22.012717 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:23:22.012726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:23:22.012735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:23:22.012747 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:23:22.012757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:23:22.012767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:23:22.012777 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:23:22.012788 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:23:22.012797 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:23:22.012807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:23:22.012818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:23:22.012827 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:23:22.012837 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:23:22.012846 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:23:22.012856 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:23:22.012901 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:23:22.012912 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:23:22.012921 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:23:22.012931 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:23:22.012942 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:23:22.012952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:23:22.012962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:23:22.012971 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:23:22.012980 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:23:22.012990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:23:22.012999 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:23:22.013009 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:23:22.013020 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:23:22.013030 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:23:22.013040 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:23:22.013059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:23:22.013070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:23:22.013104 systemd-journald[206]: Collecting audit messages is disabled.
Jul 10 00:23:22.013131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:23:22.013141 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:23:22.013152 systemd-journald[206]: Journal started
Jul 10 00:23:22.013177 systemd-journald[206]: Runtime Journal (/run/log/journal/e6681b17323b4f56b416d05cf50be58f) is 8M, max 158.9M, 150.9M free.
Jul 10 00:23:22.002052 systemd-modules-load[207]: Inserted module 'overlay'
Jul 10 00:23:22.020914 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:23:22.025981 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:23:22.031831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:23:22.040885 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:23:22.041227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:23:22.047715 kernel: Bridge firewalling registered
Jul 10 00:23:22.045498 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jul 10 00:23:22.055186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:23:22.062032 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:23:22.065457 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 00:23:22.067520 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:23:22.075566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:23:22.075881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:23:22.087980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:23:22.091964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:23:22.093695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:23:22.110527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:23:22.116402 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:23:22.120959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:23:22.140972 systemd-resolved[233]: Positive Trust Anchors:
Jul 10 00:23:22.142746 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:23:22.147963 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:23:22.142784 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:23:22.160281 systemd-resolved[233]: Defaulting to hostname 'linux'.
Jul 10 00:23:22.174941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:23:22.176226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:23:22.214886 kernel: SCSI subsystem initialized
Jul 10 00:23:22.222881 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:23:22.231894 kernel: iscsi: registered transport (tcp)
Jul 10 00:23:22.248998 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:23:22.249042 kernel: QLogic iSCSI HBA Driver
Jul 10 00:23:22.262545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:23:22.274984 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:23:22.280788 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:23:22.311524 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:23:22.315170 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:23:22.364886 kernel: raid6: avx512x4 gen() 43618 MB/s Jul 10 00:23:22.381879 kernel: raid6: avx512x2 gen() 42562 MB/s Jul 10 00:23:22.398875 kernel: raid6: avx512x1 gen() 25437 MB/s Jul 10 00:23:22.417876 kernel: raid6: avx2x4 gen() 35979 MB/s Jul 10 00:23:22.434876 kernel: raid6: avx2x2 gen() 37224 MB/s Jul 10 00:23:22.452308 kernel: raid6: avx2x1 gen() 30525 MB/s Jul 10 00:23:22.452403 kernel: raid6: using algorithm avx512x4 gen() 43618 MB/s Jul 10 00:23:22.471882 kernel: raid6: .... xor() 7424 MB/s, rmw enabled Jul 10 00:23:22.471902 kernel: raid6: using avx512x2 recovery algorithm Jul 10 00:23:22.489882 kernel: xor: automatically using best checksumming function avx Jul 10 00:23:22.609888 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:23:22.615531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:23:22.620989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:23:22.649175 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 10 00:23:22.653729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:23:22.659947 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:23:22.677396 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jul 10 00:23:22.695109 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:23:22.698768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 10 00:23:22.726827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:23:22.734109 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:23:22.784885 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:23:22.796154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:22.799035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:22.804559 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:22.813681 kernel: hv_vmbus: Vmbus version:5.3 Jul 10 00:23:22.813772 kernel: AES CTR mode by8 optimization enabled Jul 10 00:23:22.811235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:22.827670 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:23:22.827706 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:23:22.835128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:22.843812 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:22.850952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:22.864141 kernel: PTP clock support registered Jul 10 00:23:22.864176 kernel: hv_vmbus: registering driver hv_pci Jul 10 00:23:22.869901 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:22.874415 kernel: hv_utils: Registering HyperV Utility Driver Jul 10 00:23:22.874454 kernel: hv_vmbus: registering driver hv_utils Jul 10 00:23:22.875899 kernel: hv_utils: Shutdown IC version 3.2 Jul 10 00:23:22.879398 kernel: hv_utils: TimeSync IC version 4.0 Jul 10 00:23:22.879429 kernel: hv_utils: Heartbeat IC version 3.0 Jul 10 00:23:22.464228 systemd-resolved[233]: Clock change detected. Flushing caches. 
Jul 10 00:23:22.470409 systemd-journald[206]: Time jumped backwards, rotating. Jul 10 00:23:22.470454 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 10 00:23:22.470579 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 10 00:23:22.476584 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:22.476437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:22.481279 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 10 00:23:22.488095 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 10 00:23:22.488143 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 10 00:23:22.491186 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 10 00:23:22.500097 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:23:22.502900 kernel: hv_vmbus: registering driver hv_storvsc Jul 10 00:23:22.502924 kernel: hv_vmbus: registering driver hv_netvsc Jul 10 00:23:22.507101 kernel: hv_vmbus: registering driver hid_hyperv Jul 10 00:23:22.507141 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 10 00:23:22.513408 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 10 00:23:22.514169 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:22.516262 kernel: scsi host0: storvsc_host_t Jul 10 00:23:22.516413 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 10 00:23:22.518210 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 10 00:23:22.522670 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 
1.0 PQ: 0 ANSI: 5 Jul 10 00:23:22.531099 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d45a4d9 (unnamed net_device) (uninitialized): VF slot 1 added Jul 10 00:23:22.544775 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 10 00:23:22.544979 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:23:22.548112 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 10 00:23:22.557875 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 10 00:23:22.558067 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:22.569101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:22.583102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:22.815108 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 00:23:22.820111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:23.026107 kernel: nvme nvme0: using unchecked data buffer Jul 10 00:23:23.182957 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 10 00:23:23.204258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:23:23.230005 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:23.231375 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:23.239253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:23:23.254062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 10 00:23:23.269690 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:23.269717 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:23.259716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 10 00:23:23.276791 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:23:23.281173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:23:23.284041 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:23:23.289320 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:23:23.319006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:23:23.548974 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:23.549235 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 10 00:23:23.551830 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 10 00:23:23.553350 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:23.558295 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 10 00:23:23.562196 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 10 00:23:23.567225 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 10 00:23:23.567247 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 10 00:23:23.585113 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:23.585297 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 10 00:23:23.589223 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 10 00:23:23.593764 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:23.602099 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 10 00:23:23.606296 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d45a4d9 eth0: VF registering: eth1 Jul 10 00:23:23.606460 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 10 00:23:23.611099 kernel: mana 
7870:00:00.0 enP30832s1: renamed from eth1 Jul 10 00:23:24.278146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:24.278291 disk-uuid[673]: The operation has completed successfully. Jul 10 00:23:24.335279 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:23:24.335365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:23:24.369208 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:23:24.384205 sh[717]: Success Jul 10 00:23:24.411568 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:23:24.411615 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:23:24.412100 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:23:24.422099 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:23:24.601618 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:23:24.606163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:23:24.619656 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:23:24.633096 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:23:24.636120 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (730) Jul 10 00:23:24.639153 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:23:24.639243 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:24.640275 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:23:24.961723 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:23:24.963261 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jul 10 00:23:24.963599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:23:24.965200 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:23:24.967192 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:23:25.002115 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (767) Jul 10 00:23:25.008787 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:25.008825 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:25.008836 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:25.028102 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:25.028558 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:23:25.034200 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:23:25.052550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:23:25.054337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:23:25.097434 systemd-networkd[899]: lo: Link UP Jul 10 00:23:25.097441 systemd-networkd[899]: lo: Gained carrier Jul 10 00:23:25.107161 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:23:25.107401 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:25.098963 systemd-networkd[899]: Enumeration completed Jul 10 00:23:25.109589 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d45a4d9 eth0: Data path switched to VF: enP30832s1 Jul 10 00:23:25.099035 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 10 00:23:25.099417 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:25.099420 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:25.102787 systemd[1]: Reached target network.target - Network. Jul 10 00:23:25.110960 systemd-networkd[899]: enP30832s1: Link UP Jul 10 00:23:25.111027 systemd-networkd[899]: eth0: Link UP Jul 10 00:23:25.111194 systemd-networkd[899]: eth0: Gained carrier Jul 10 00:23:25.111203 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:25.121950 systemd-networkd[899]: enP30832s1: Gained carrier Jul 10 00:23:25.133131 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:25.834923 ignition[869]: Ignition 2.21.0 Jul 10 00:23:25.834935 ignition[869]: Stage: fetch-offline Jul 10 00:23:25.837071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:23:25.835019 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:25.841346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 10 00:23:25.835026 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:25.835123 ignition[869]: parsed url from cmdline: "" Jul 10 00:23:25.835125 ignition[869]: no config URL provided Jul 10 00:23:25.835129 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:25.835135 ignition[869]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:25.835140 ignition[869]: failed to fetch config: resource requires networking Jul 10 00:23:25.835364 ignition[869]: Ignition finished successfully Jul 10 00:23:25.862150 ignition[909]: Ignition 2.21.0 Jul 10 00:23:25.862154 ignition[909]: Stage: fetch Jul 10 00:23:25.862370 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:25.862375 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:25.862443 ignition[909]: parsed url from cmdline: "" Jul 10 00:23:25.862445 ignition[909]: no config URL provided Jul 10 00:23:25.862448 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:25.862451 ignition[909]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:25.862474 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 10 00:23:25.936284 ignition[909]: GET result: OK Jul 10 00:23:25.936365 ignition[909]: config has been read from IMDS userdata Jul 10 00:23:25.936394 ignition[909]: parsing config with SHA512: 04ad17fb34268106b65bc76386ae4d8eff079c0f72117e36e71c60566da389eca16473b36aa77170d1ba0bb4638d7813de82eb65abcc5a34fb3ed2c923cb0223 Jul 10 00:23:25.943664 unknown[909]: fetched base config from "system" Jul 10 00:23:25.943673 unknown[909]: fetched base config from "system" Jul 10 00:23:25.944004 ignition[909]: fetch: fetch complete Jul 10 00:23:25.943678 unknown[909]: fetched user config from "azure" Jul 10 00:23:25.944009 ignition[909]: fetch: fetch passed Jul 10 00:23:25.946850 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Jul 10 00:23:25.944047 ignition[909]: Ignition finished successfully Jul 10 00:23:25.948689 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:23:25.974803 ignition[915]: Ignition 2.21.0 Jul 10 00:23:25.974814 ignition[915]: Stage: kargs Jul 10 00:23:25.975002 ignition[915]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:25.977523 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:23:25.975010 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:25.979889 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:23:25.975763 ignition[915]: kargs: kargs passed Jul 10 00:23:25.975796 ignition[915]: Ignition finished successfully Jul 10 00:23:26.001792 ignition[921]: Ignition 2.21.0 Jul 10 00:23:26.001802 ignition[921]: Stage: disks Jul 10 00:23:26.002002 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:26.003704 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:23:26.002009 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:26.002897 ignition[921]: disks: disks passed Jul 10 00:23:26.002928 ignition[921]: Ignition finished successfully Jul 10 00:23:26.009721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:23:26.011214 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:23:26.011642 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:23:26.011670 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:23:26.011691 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:23:26.014986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 10 00:23:26.077822 systemd-fsck[929]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 10 00:23:26.083744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:23:26.088976 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:23:26.333356 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:23:26.333982 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:23:26.336574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:23:26.353700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:26.368098 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:23:26.374544 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 10 00:23:26.379455 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:23:26.380463 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:23:26.392754 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (938) Jul 10 00:23:26.392775 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:26.392789 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:26.392799 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:26.391881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:23:26.398941 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:23:26.405190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:23:26.629247 systemd-networkd[899]: eth0: Gained IPv6LL Jul 10 00:23:26.765136 coreos-metadata[940]: Jul 10 00:23:26.765 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:23:26.769410 coreos-metadata[940]: Jul 10 00:23:26.769 INFO Fetch successful Jul 10 00:23:26.770540 coreos-metadata[940]: Jul 10 00:23:26.769 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:23:26.786307 coreos-metadata[940]: Jul 10 00:23:26.786 INFO Fetch successful Jul 10 00:23:26.799377 coreos-metadata[940]: Jul 10 00:23:26.799 INFO wrote hostname ci-4344.1.1-n-4eb7f9ac8a to /sysroot/etc/hostname Jul 10 00:23:26.802055 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 10 00:23:26.935428 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:23:26.963906 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:23:26.979061 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:23:26.983559 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:23:27.077223 systemd-networkd[899]: enP30832s1: Gained IPv6LL Jul 10 00:23:27.716212 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:23:27.721377 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:23:27.735206 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:23:27.742683 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:23:27.746353 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:27.770780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 10 00:23:27.774274 ignition[1056]: INFO : Ignition 2.21.0 Jul 10 00:23:27.774274 ignition[1056]: INFO : Stage: mount Jul 10 00:23:27.774274 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:27.774274 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:27.774274 ignition[1056]: INFO : mount: mount passed Jul 10 00:23:27.786988 ignition[1056]: INFO : Ignition finished successfully Jul 10 00:23:27.775407 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:23:27.780570 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:23:27.795420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:27.829810 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1069) Jul 10 00:23:27.829856 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:27.831093 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:27.832611 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:27.837920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:23:27.860203 ignition[1086]: INFO : Ignition 2.21.0 Jul 10 00:23:27.860203 ignition[1086]: INFO : Stage: files Jul 10 00:23:27.865117 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:27.865117 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:27.865117 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:23:27.880160 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:23:27.880160 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:23:27.897288 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:23:27.899477 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:23:27.899477 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:23:27.897615 unknown[1086]: wrote ssh authorized keys file for user: core Jul 10 00:23:27.933962 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:23:27.936733 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:23:57.934556 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz": dial tcp 13.107.246.52:443: i/o timeout Jul 10 00:23:58.134951 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #2 Jul 10 00:23:58.198294 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:23:58.395745 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:23:58.398792 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:58.421110 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 10 00:23:59.223170 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:23:59.834482 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:59.834482 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:23:59.849639 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:23:59.856240 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:23:59.856240 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:23:59.860446 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:23:59.860446 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:23:59.860446 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:23:59.860446 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:23:59.860446 ignition[1086]: INFO : files: files passed
Jul 10 00:23:59.860446 ignition[1086]: INFO : Ignition finished successfully
Jul 10 00:23:59.862526 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:23:59.868409 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:23:59.876273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:23:59.892339 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:23:59.892916 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:23:59.900931 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:59.900931 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:59.899519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:23:59.910165 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:59.908560 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:23:59.916012 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:23:59.955382 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:23:59.955472 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:23:59.958575 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:23:59.959499 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:23:59.959758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:23:59.960416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:23:59.995183 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:23:59.998232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:24:00.017686 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:24:00.017957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:24:00.018701 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:24:00.019015 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:24:00.019147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:24:00.027368 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:24:00.031226 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:24:00.032820 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:24:00.036504 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:24:00.041261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:24:00.045215 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:24:00.049223 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:24:00.051435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:24:00.055234 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:24:00.059237 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:24:00.061933 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:24:00.066196 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:24:00.067652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:24:00.076303 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:24:00.079208 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:24:00.082350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:24:00.082572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:24:00.082644 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:24:00.082748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:24:00.093815 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:24:00.093949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:24:00.098961 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:24:00.099063 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:24:00.104433 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 00:24:00.104547 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 00:24:00.109658 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:24:00.112973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:24:00.113108 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:24:00.125500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:24:00.129406 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:24:00.129594 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:24:00.140304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:24:00.140436 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:24:00.148196 ignition[1140]: INFO : Ignition 2.21.0
Jul 10 00:24:00.148196 ignition[1140]: INFO : Stage: umount
Jul 10 00:24:00.148196 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:24:00.148196 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 00:24:00.148196 ignition[1140]: INFO : umount: umount passed
Jul 10 00:24:00.148196 ignition[1140]: INFO : Ignition finished successfully
Jul 10 00:24:00.149579 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:24:00.150420 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:24:00.155877 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:24:00.155957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:24:00.169501 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:24:00.169834 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:24:00.169866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:24:00.175757 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:24:00.175798 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:24:00.180155 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:24:00.180192 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:24:00.180410 systemd[1]: Stopped target network.target - Network.
Jul 10 00:24:00.180434 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:24:00.180460 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:24:00.180747 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:24:00.189809 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:24:00.194769 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:24:00.209130 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:24:00.210290 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:24:00.211691 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:24:00.211724 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:24:00.214419 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:24:00.214444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:24:00.218256 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:24:00.218291 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:24:00.223912 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:24:00.223946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:24:00.227245 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:24:00.233759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:24:00.235611 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:24:00.235685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:24:00.238747 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:24:00.238838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:24:00.245582 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:24:00.245775 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:24:00.245873 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:24:00.256794 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:24:00.258399 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:24:00.262159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:24:00.262196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:24:00.266153 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:24:00.266208 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:24:00.272631 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:24:00.275900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:24:00.275947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:24:00.280177 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:24:00.280224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:24:00.284291 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:24:00.284334 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:24:00.286806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:24:00.310779 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d45a4d9 eth0: Data path switched from VF: enP30832s1
Jul 10 00:24:00.311940 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 10 00:24:00.286849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:24:00.288697 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:24:00.289789 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:24:00.289844 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:24:00.299338 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:24:00.299466 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:24:00.302526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:24:00.302625 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:24:00.314506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:24:00.314539 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:24:00.317717 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:24:00.317764 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:24:00.333464 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:24:00.333519 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:24:00.338199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:24:00.338249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:24:00.343190 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:24:00.345372 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:24:00.345899 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:24:00.346174 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:24:00.346215 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:24:00.346742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:24:00.346777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:24:00.360434 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:24:00.360481 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:24:00.360514 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:24:00.360764 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:24:00.365572 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:24:00.368334 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:24:00.368409 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:24:00.370332 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:24:00.376706 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:24:00.501336 systemd[1]: Switching root.
Jul 10 00:24:00.575917 systemd-journald[206]: Journal stopped
Jul 10 00:24:12.783589 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:24:12.783631 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:24:12.783647 kernel: SELinux: policy capability open_perms=1
Jul 10 00:24:12.783658 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:24:12.783668 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:24:12.783679 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:24:12.783696 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:24:12.783710 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:24:12.783720 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:24:12.783730 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:24:12.783740 kernel: audit: type=1403 audit(1752107048.187:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:24:12.783753 systemd[1]: Successfully loaded SELinux policy in 184.232ms.
Jul 10 00:24:12.783766 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.351ms.
Jul 10 00:24:12.783784 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:24:12.783797 systemd[1]: Detected virtualization microsoft.
Jul 10 00:24:12.783809 systemd[1]: Detected architecture x86-64.
Jul 10 00:24:12.783822 systemd[1]: Detected first boot.
Jul 10 00:24:12.783835 systemd[1]: Hostname set to .
Jul 10 00:24:12.783850 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:24:12.783862 zram_generator::config[1184]: No configuration found.
Jul 10 00:24:12.783875 kernel: Guest personality initialized and is inactive
Jul 10 00:24:12.783886 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 10 00:24:12.783900 kernel: Initialized host personality
Jul 10 00:24:12.783910 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:24:12.783921 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:24:12.783939 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:24:12.783951 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:24:12.783966 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:24:12.783980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:24:12.783994 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:24:12.784006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:24:12.784018 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:24:12.784032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:24:12.784045 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:24:12.784056 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:24:12.784069 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:24:12.784104 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:24:12.784116 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:24:12.784127 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:24:12.784137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:24:12.784149 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:24:12.784161 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:24:12.784173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:24:12.784184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:24:12.784194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:24:12.784205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:24:12.784215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:24:12.784225 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:24:12.784238 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:24:12.784248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:24:12.784258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:24:12.784268 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:24:12.784278 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:24:12.784288 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:24:12.784298 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:24:12.784308 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:24:12.784321 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:24:12.784332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:24:12.784342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:24:12.784353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:24:12.784363 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:24:12.784376 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:24:12.784387 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:24:12.784397 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:24:12.784408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:12.784419 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:24:12.784430 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:24:12.784441 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:24:12.784452 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:24:12.784465 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:24:12.784476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:24:12.784487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:24:12.784498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:24:12.784509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:24:12.784520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:24:12.784531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:24:12.784542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:24:12.784552 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:24:12.784565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:24:12.784576 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:24:12.784587 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:24:12.784597 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:24:12.784609 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:24:12.784619 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:24:12.784631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:24:12.784642 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:24:12.784655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:24:12.784666 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:24:12.784677 kernel: loop: module loaded
Jul 10 00:24:12.784687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:24:12.784720 systemd-journald[1262]: Collecting audit messages is disabled.
Jul 10 00:24:12.784747 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:24:12.784760 systemd-journald[1262]: Journal started
Jul 10 00:24:12.784786 systemd-journald[1262]: Runtime Journal (/run/log/journal/fec5405e04f444bf96e984b269570ba9) is 8M, max 158.9M, 150.9M free.
Jul 10 00:24:12.137569 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:24:12.152725 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 10 00:24:12.153092 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:24:12.801162 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:24:12.806108 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:24:12.809759 kernel: fuse: init (API version 7.41)
Jul 10 00:24:12.809802 systemd[1]: Stopped verity-setup.service.
Jul 10 00:24:12.818100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:12.829406 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:24:12.826665 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:24:12.829120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:24:12.831439 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:24:12.833703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:24:12.838319 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:24:12.842316 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:24:12.846434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:24:12.849235 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:24:12.849415 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:24:12.853437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:24:12.853606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:24:12.856339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:24:12.857125 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:24:12.859552 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:24:12.859696 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:24:12.863407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:24:12.863569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:24:12.868192 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:24:12.870460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:24:12.872838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:24:12.884173 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:24:12.891476 kernel: ACPI: bus type drm_connector registered
Jul 10 00:24:12.891859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:24:12.895631 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:24:12.897613 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:24:12.897719 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:24:12.900646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:24:12.904190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:24:13.146330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:24:13.162228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:24:13.168249 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:24:13.172245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:24:13.173184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:24:13.176199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:24:13.180273 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:24:13.185205 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:24:13.188847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:24:13.192421 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:24:13.192956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:24:13.195880 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:24:13.199274 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:24:13.202383 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:24:13.211173 systemd-journald[1262]: Time spent on flushing to /var/log/journal/fec5405e04f444bf96e984b269570ba9 is 144.568ms for 987 entries.
Jul 10 00:24:13.211173 systemd-journald[1262]: System Journal (/var/log/journal/fec5405e04f444bf96e984b269570ba9) is 11.8M, max 2.6G, 2.6G free.
Jul 10 00:24:14.257509 systemd-journald[1262]: Received client request to flush runtime journal.
Jul 10 00:24:14.257567 kernel: loop0: detected capacity change from 0 to 28496
Jul 10 00:24:14.257589 systemd-journald[1262]: /var/log/journal/fec5405e04f444bf96e984b269570ba9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 10 00:24:14.257618 systemd-journald[1262]: Rotating system journal.
Jul 10 00:24:13.215545 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:24:13.237018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:24:13.355816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:24:13.492533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:24:13.499127 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:24:13.501708 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:24:13.506258 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:24:13.510186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:24:13.908759 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jul 10 00:24:13.908779 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Jul 10 00:24:13.913639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:24:14.261410 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:24:14.269033 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:24:14.270482 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:24:14.371110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:24:14.762100 kernel: loop1: detected capacity change from 0 to 146240
Jul 10 00:24:16.217109 kernel: loop2: detected capacity change from 0 to 113872
Jul 10 00:24:16.233907 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:24:16.237071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:24:16.264876 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Jul 10 00:24:16.368494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:24:16.374219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:24:16.425235 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:24:16.450817 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:24:16.510105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 10 00:24:16.531643 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:24:16.554102 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 00:24:16.558099 kernel: hv_vmbus: registering driver hv_balloon
Jul 10 00:24:16.561097 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 10 00:24:16.572099 kernel: hv_vmbus: registering driver hyperv_fb
Jul 10 00:24:16.574105 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 10 00:24:16.577103 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 10 00:24:16.578185 kernel: Console: switching to colour dummy device 80x25
Jul 10 00:24:16.583104 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 00:24:16.616102 kernel: loop3: detected capacity change from 0 to 221472
Jul 10 00:24:16.644160 kernel: loop4: detected capacity change from 0 to 28496
Jul 10 00:24:16.671101 kernel: loop5: detected capacity change from 0 to 146240
Jul 10 00:24:16.672416 systemd-networkd[1357]: lo: Link UP
Jul 10 00:24:16.672428 systemd-networkd[1357]: lo: Gained carrier
Jul 10 00:24:16.675284 systemd-networkd[1357]: Enumeration completed
Jul 10 00:24:16.675504 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:24:16.676112 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:24:16.676195 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:24:16.678101 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 10 00:24:16.679853 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 00:24:16.687550 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 10 00:24:16.683467 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:24:16.691807 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d45a4d9 eth0: Data path switched to VF: enP30832s1
Jul 10 00:24:16.693896 systemd-networkd[1357]: enP30832s1: Link UP
Jul 10 00:24:16.694147 systemd-networkd[1357]: eth0: Link UP
Jul 10 00:24:16.694567 systemd-networkd[1357]: eth0: Gained carrier
Jul 10 00:24:16.694644 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:24:16.699336 systemd-networkd[1357]: enP30832s1: Gained carrier
Jul 10 00:24:16.706118 kernel: loop6: detected capacity change from 0 to 113872
Jul 10 00:24:16.707516 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 10 00:24:16.721390 kernel: loop7: detected capacity change from 0 to 221472
Jul 10 00:24:16.727841 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 00:24:16.739772 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 10 00:24:16.741147 (sd-merge)[1421]: Merged extensions into '/usr'.
Jul 10 00:24:16.749162 systemd[1]: Reload requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:24:16.749180 systemd[1]: Reloading...
Jul 10 00:24:16.905133 zram_generator::config[1459]: No configuration found.
Jul 10 00:24:16.966100 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 10 00:24:17.053094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:24:17.146844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 10 00:24:17.150577 systemd[1]: Reloading finished in 401 ms.
Jul 10 00:24:17.167948 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:24:17.205015 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:24:17.209240 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:24:17.221378 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:24:17.226036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:24:17.242759 systemd[1]: Reload requested from client PID 1525 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:24:17.242773 systemd[1]: Reloading...
Jul 10 00:24:17.253841 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 00:24:17.254052 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 00:24:17.254359 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:24:17.254579 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:24:17.255278 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:24:17.255512 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Jul 10 00:24:17.255558 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Jul 10 00:24:17.260216 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:24:17.260228 systemd-tmpfiles[1527]: Skipping /boot
Jul 10 00:24:17.269845 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:24:17.269861 systemd-tmpfiles[1527]: Skipping /boot
Jul 10 00:24:17.303108 zram_generator::config[1560]: No configuration found.
Jul 10 00:24:17.404014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:24:17.498584 systemd[1]: Reloading finished in 255 ms.
Jul 10 00:24:17.529716 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:24:17.530789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:24:17.536624 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:24:17.539226 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:24:17.544508 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:24:17.552151 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:24:17.556219 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:24:17.561926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.562421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:24:17.565514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:24:17.569463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:24:17.571341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:24:17.571822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:24:17.571917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:24:17.572009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.576674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.577309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:24:17.577479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:24:17.577559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:24:17.577634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.585018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:24:17.586232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:24:17.587646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.587986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:24:17.595910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:24:17.596518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:24:17.596624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:24:17.596778 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:24:17.597051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:24:17.597905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:24:17.599735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:24:17.609135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:24:17.609693 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:24:17.609834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:24:17.615558 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:24:17.616355 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:24:17.618579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:24:17.618713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:24:17.631385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:24:17.631568 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:24:17.680172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:24:17.681886 systemd-resolved[1628]: Positive Trust Anchors:
Jul 10 00:24:17.682127 systemd-resolved[1628]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:24:17.682207 systemd-resolved[1628]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:24:17.685782 systemd-resolved[1628]: Using system hostname 'ci-4344.1.1-n-4eb7f9ac8a'.
Jul 10 00:24:17.687568 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:24:17.691228 systemd[1]: Reached target network.target - Network.
Jul 10 00:24:17.692657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:24:17.709462 augenrules[1664]: No rules
Jul 10 00:24:17.710302 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:24:17.710490 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:24:18.056680 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:24:18.059339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:24:18.533231 systemd-networkd[1357]: enP30832s1: Gained IPv6LL
Jul 10 00:24:18.597214 systemd-networkd[1357]: eth0: Gained IPv6LL
Jul 10 00:24:18.600113 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 10 00:24:18.604320 systemd[1]: Reached target network-online.target - Network is Online.
Jul 10 00:24:19.270583 ldconfig[1310]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:24:19.285534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:24:19.289384 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:24:19.305971 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:24:19.309294 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:24:19.310698 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:24:19.312168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:24:19.313635 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 00:24:19.315394 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:24:19.318234 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:24:19.321135 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:24:19.322654 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:24:19.322679 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:24:19.325130 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:24:19.339813 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:24:19.342491 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:24:19.345551 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 00:24:19.348293 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 00:24:19.349927 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 00:24:19.361531 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:24:19.364439 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 00:24:19.366792 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:24:19.368867 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:24:19.371128 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:24:19.372417 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:24:19.372443 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:24:19.374401 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 10 00:24:19.378319 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 00:24:19.383206 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 10 00:24:19.387258 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 00:24:19.393195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:24:19.398266 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 00:24:19.402199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 00:24:19.404167 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:24:19.405122 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 00:24:19.408233 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jul 10 00:24:19.410150 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 10 00:24:19.413214 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 10 00:24:19.418314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:19.424319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 00:24:19.429196 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 10 00:24:19.431993 jq[1682]: false
Jul 10 00:24:19.435969 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 00:24:19.435242 KVP[1688]: KVP starting; pid is:1688
Jul 10 00:24:19.445727 kernel: hv_utils: KVP IC version 4.0
Jul 10 00:24:19.443423 KVP[1688]: KVP LIC Version: 3.1
Jul 10 00:24:19.443598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 00:24:19.448618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 00:24:19.455464 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Refreshing passwd entry cache
Jul 10 00:24:19.455680 oslogin_cache_refresh[1687]: Refreshing passwd entry cache
Jul 10 00:24:19.455902 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 00:24:19.460964 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:24:19.462428 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 00:24:19.467053 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 00:24:19.469805 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 00:24:19.476110 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Failure getting users, quitting
Jul 10 00:24:19.476110 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:24:19.476110 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Refreshing group entry cache
Jul 10 00:24:19.475823 oslogin_cache_refresh[1687]: Failure getting users, quitting
Jul 10 00:24:19.475839 oslogin_cache_refresh[1687]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:24:19.475876 oslogin_cache_refresh[1687]: Refreshing group entry cache
Jul 10 00:24:19.479624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:24:19.482569 extend-filesystems[1686]: Found /dev/nvme0n1p6
Jul 10 00:24:19.483534 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 00:24:19.483724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 00:24:19.496986 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Failure getting groups, quitting
Jul 10 00:24:19.496986 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:24:19.493402 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 00:24:19.491253 oslogin_cache_refresh[1687]: Failure getting groups, quitting
Jul 10 00:24:19.493598 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 00:24:19.491264 oslogin_cache_refresh[1687]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:24:19.496435 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 00:24:19.496621 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 00:24:19.506729 extend-filesystems[1686]: Found /dev/nvme0n1p9
Jul 10 00:24:19.528348 extend-filesystems[1686]: Checking size of /dev/nvme0n1p9
Jul 10 00:24:19.533253 jq[1702]: true
Jul 10 00:24:19.539424 update_engine[1701]: I20250710 00:24:19.539023 1701 main.cc:92] Flatcar Update Engine starting
Jul 10 00:24:19.539775 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 00:24:19.540241 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 00:24:19.543865 (chronyd)[1677]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 10 00:24:19.548209 (ntainerd)[1724]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 00:24:19.555995 jq[1728]: true
Jul 10 00:24:19.568775 chronyd[1737]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 10 00:24:19.572628 extend-filesystems[1686]: Old size kept for /dev/nvme0n1p9
Jul 10 00:24:19.572528 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:24:19.574349 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 00:24:19.578565 chronyd[1737]: Timezone right/UTC failed leap second check, ignoring
Jul 10 00:24:19.579834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 10 00:24:19.578722 chronyd[1737]: Loaded seccomp filter (level 2)
Jul 10 00:24:19.583452 systemd[1]: Started chronyd.service - NTP client/server.
Jul 10 00:24:19.603826 tar[1710]: linux-amd64/helm
Jul 10 00:24:19.628606 systemd-logind[1699]: New seat seat0.
Jul 10 00:24:19.632499 dbus-daemon[1680]: [system] SELinux support is enabled
Jul 10 00:24:19.633400 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 00:24:19.636465 systemd-logind[1699]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 00:24:19.639035 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 00:24:19.641809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:24:19.641841 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:24:19.646206 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:24:19.646232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 00:24:19.647207 update_engine[1701]: I20250710 00:24:19.647037 1701 update_check_scheduler.cc:74] Next update check in 8m14s
Jul 10 00:24:19.655035 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 00:24:19.662338 dbus-daemon[1680]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 10 00:24:19.663867 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 00:24:19.691901 bash[1760]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:24:19.692412 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 00:24:19.696015 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 10 00:24:19.699157 coreos-metadata[1679]: Jul 10 00:24:19.699 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 10 00:24:19.707304 coreos-metadata[1679]: Jul 10 00:24:19.706 INFO Fetch successful
Jul 10 00:24:19.707304 coreos-metadata[1679]: Jul 10 00:24:19.707 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 10 00:24:19.710906 coreos-metadata[1679]: Jul 10 00:24:19.710 INFO Fetch successful
Jul 10 00:24:19.710906 coreos-metadata[1679]: Jul 10 00:24:19.710 INFO Fetching http://168.63.129.16/machine/905c4e59-d949-488a-a673-977ae5214ac1/ad335d83%2D6eb0%2D474a%2D8558%2D1094fca1676d.%5Fci%2D4344.1.1%2Dn%2D4eb7f9ac8a?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 10 00:24:19.712679 coreos-metadata[1679]: Jul 10 00:24:19.712 INFO Fetch successful
Jul 10 00:24:19.713852 coreos-metadata[1679]: Jul 10 00:24:19.712 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 10 00:24:19.727306 coreos-metadata[1679]: Jul 10 00:24:19.725 INFO Fetch successful
Jul 10 00:24:19.769343 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 10 00:24:19.773473 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 10 00:24:20.042834 locksmithd[1767]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 00:24:20.416662 tar[1710]: linux-amd64/LICENSE
Jul 10 00:24:20.416662 tar[1710]: linux-amd64/README.md
Jul 10 00:24:20.432530 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 10 00:24:20.478970 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 00:24:20.480739 containerd[1724]: time="2025-07-10T00:24:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 00:24:20.481383 containerd[1724]: time="2025-07-10T00:24:20.481344860Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 10 00:24:20.501543 containerd[1724]: time="2025-07-10T00:24:20.501509428Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.247µs"
Jul 10 00:24:20.501543 containerd[1724]: time="2025-07-10T00:24:20.501540207Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 00:24:20.501630 containerd[1724]: time="2025-07-10T00:24:20.501558389Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 00:24:20.501684 containerd[1724]: time="2025-07-10T00:24:20.501670262Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 00:24:20.501711 containerd[1724]: time="2025-07-10T00:24:20.501693338Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 00:24:20.501729 containerd[1724]: time="2025-07-10T00:24:20.501714645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:24:20.501772 containerd[1724]: time="2025-07-10T00:24:20.501759116Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:24:20.501792 containerd[1724]: time="2025-07-10T00:24:20.501772573Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:24:20.501995 containerd[1724]: time="2025-07-10T00:24:20.501977292Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:24:20.502020 containerd[1724]: time="2025-07-10T00:24:20.501995314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:24:20.502020 containerd[1724]: time="2025-07-10T00:24:20.502011866Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:24:20.502058 containerd[1724]: time="2025-07-10T00:24:20.502021147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 00:24:20.503183 containerd[1724]: time="2025-07-10T00:24:20.503156449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 00:24:20.503393 containerd[1724]: time="2025-07-10T00:24:20.503375338Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:24:20.503422 containerd[1724]: time="2025-07-10T00:24:20.503410593Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:24:20.503446 containerd[1724]: time="2025-07-10T00:24:20.503422285Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 00:24:20.503465 containerd[1724]: time="2025-07-10T00:24:20.503455787Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 00:24:20.503779 containerd[1724]: time="2025-07-10T00:24:20.503764088Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 00:24:20.503839 containerd[1724]: time="2025-07-10T00:24:20.503828339Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 00:24:20.505861 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 00:24:20.510246 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 00:24:20.516339 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 10 00:24:20.520376 containerd[1724]: time="2025-07-10T00:24:20.520342008Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520462226Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520480170Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520492473Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520537739Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520549115Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520566028Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520605929Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520621008Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520631448Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520641002Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520672284Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520787776Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520804342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 00:24:20.520898 containerd[1724]: time="2025-07-10T00:24:20.520828584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 00:24:20.521192 containerd[1724]: time="2025-07-10T00:24:20.520839879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 00:24:20.521192 containerd[1724]: time="2025-07-10T00:24:20.520849600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 00:24:20.521192 containerd[1724]: time="2025-07-10T00:24:20.520861349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 00:24:20.521192 containerd[1724]: time="2025-07-10T00:24:20.520872645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 00:24:20.521192 containerd[1724]: time="2025-07-10T00:24:20.520882662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 00:24:20.522153 containerd[1724]: time="2025-07-10T00:24:20.522016901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 00:24:20.522153 containerd[1724]: time="2025-07-10T00:24:20.522047442Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 00:24:20.522153 containerd[1724]: time="2025-07-10T00:24:20.522061389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 00:24:20.522423 containerd[1724]: time="2025-07-10T00:24:20.522364931Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 00:24:20.522423 containerd[1724]: time="2025-07-10T00:24:20.522385640Z" level=info msg="Start snapshots syncer"
Jul 10 00:24:20.522615 containerd[1724]: time="2025-07-10T00:24:20.522600186Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 00:24:20.525095 containerd[1724]: time="2025-07-10T00:24:20.524666890Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:24:20.525095 containerd[1724]: time="2025-07-10T00:24:20.524738766Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.524888285Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.524986886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525004659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525015898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525025989Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525036548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525046785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:24:20.525265 containerd[1724]: time="2025-07-10T00:24:20.525058191Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:24:20.525457 containerd[1724]: time="2025-07-10T00:24:20.525442086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:24:20.525519 containerd[1724]: time="2025-07-10T00:24:20.525510822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:24:20.525560 containerd[1724]: time="2025-07-10T00:24:20.525552691Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:24:20.525713 containerd[1724]: time="2025-07-10T00:24:20.525702986Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526150734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526183400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526195729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526205978Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526216759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:24:20.526256 containerd[1724]: time="2025-07-10T00:24:20.526228505Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:24:20.526485 containerd[1724]: time="2025-07-10T00:24:20.526244634Z" level=info msg="runtime interface created" Jul 10 00:24:20.526485 containerd[1724]: time="2025-07-10T00:24:20.526417737Z" level=info msg="created NRI interface" Jul 10 00:24:20.526485 containerd[1724]: time="2025-07-10T00:24:20.526431547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:24:20.526485 containerd[1724]: time="2025-07-10T00:24:20.526446048Z" level=info msg="Connect containerd service" Jul 10 00:24:20.526844 containerd[1724]: time="2025-07-10T00:24:20.526769572Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:24:20.533269 
containerd[1724]: time="2025-07-10T00:24:20.533242976Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:24:20.537454 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:24:20.539343 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:24:20.544048 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:24:20.549284 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 00:24:20.564868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:24:20.570725 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:24:20.575398 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:24:20.577433 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:24:21.054820 containerd[1724]: time="2025-07-10T00:24:21.054683857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:24:21.054820 containerd[1724]: time="2025-07-10T00:24:21.054746238Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 10 00:24:21.054820 containerd[1724]: time="2025-07-10T00:24:21.054762880Z" level=info msg="Start subscribing containerd event" Jul 10 00:24:21.054820 containerd[1724]: time="2025-07-10T00:24:21.054789962Z" level=info msg="Start recovering state" Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054874002Z" level=info msg="Start event monitor" Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054887588Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054894241Z" level=info msg="Start streaming server" Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054903187Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054910305Z" level=info msg="runtime interface starting up..." Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054916989Z" level=info msg="starting plugins..." Jul 10 00:24:21.054964 containerd[1724]: time="2025-07-10T00:24:21.054928530Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:24:21.055120 containerd[1724]: time="2025-07-10T00:24:21.055021585Z" level=info msg="containerd successfully booted in 0.576800s" Jul 10 00:24:21.055158 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:24:21.077110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:21.080802 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:24:21.083314 systemd[1]: Startup finished in 3.294s (kernel) + 46.691s (initrd) + 13.078s (userspace) = 1min 3.064s. 
Jul 10 00:24:21.087325 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:24:21.229919 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:24:21.232053 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:24:21.238121 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:24:21.239962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:24:21.250679 systemd-logind[1699]: New session 2 of user core. Jul 10 00:24:21.255859 systemd-logind[1699]: New session 1 of user core. Jul 10 00:24:21.276863 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:24:21.279354 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:24:21.291194 (systemd)[1856]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:24:21.295506 systemd-logind[1699]: New session c1 of user core. Jul 10 00:24:21.451516 systemd[1856]: Queued start job for default target default.target. Jul 10 00:24:21.457234 systemd[1856]: Created slice app.slice - User Application Slice. Jul 10 00:24:21.457260 systemd[1856]: Reached target paths.target - Paths. Jul 10 00:24:21.457499 systemd[1856]: Reached target timers.target - Timers. Jul 10 00:24:21.458926 systemd[1856]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:24:21.468483 systemd[1856]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:24:21.468536 systemd[1856]: Reached target sockets.target - Sockets. Jul 10 00:24:21.468571 systemd[1856]: Reached target basic.target - Basic System. Jul 10 00:24:21.468632 systemd[1856]: Reached target default.target - Main User Target. Jul 10 00:24:21.468655 systemd[1856]: Startup finished in 166ms. 
Jul 10 00:24:21.468734 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:24:21.474243 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:24:21.476181 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:24:21.736693 waagent[1826]: 2025-07-10T00:24:21.736575Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 10 00:24:21.737029 waagent[1826]: 2025-07-10T00:24:21.736986Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 10 00:24:21.739292 waagent[1826]: 2025-07-10T00:24:21.739212Z INFO Daemon Daemon Python: 3.11.12 Jul 10 00:24:21.740907 waagent[1826]: 2025-07-10T00:24:21.740419Z INFO Daemon Daemon Run daemon Jul 10 00:24:21.742530 waagent[1826]: 2025-07-10T00:24:21.741881Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 10 00:24:21.743782 waagent[1826]: 2025-07-10T00:24:21.743146Z INFO Daemon Daemon Using waagent for provisioning Jul 10 00:24:21.747100 waagent[1826]: 2025-07-10T00:24:21.746555Z INFO Daemon Daemon Activate resource disk Jul 10 00:24:21.748427 waagent[1826]: 2025-07-10T00:24:21.748375Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 10 00:24:21.752857 waagent[1826]: 2025-07-10T00:24:21.752803Z INFO Daemon Daemon Found device: None Jul 10 00:24:21.754786 waagent[1826]: 2025-07-10T00:24:21.754578Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 10 00:24:21.757856 waagent[1826]: 2025-07-10T00:24:21.757682Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 10 00:24:21.762004 waagent[1826]: 2025-07-10T00:24:21.761327Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:24:21.763217 waagent[1826]: 2025-07-10T00:24:21.763173Z INFO Daemon Daemon Running default provisioning handler Jul 10 00:24:21.771232 
waagent[1826]: 2025-07-10T00:24:21.771192Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 10 00:24:21.773095 waagent[1826]: 2025-07-10T00:24:21.772184Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 10 00:24:21.773095 waagent[1826]: 2025-07-10T00:24:21.772499Z INFO Daemon Daemon cloud-init is enabled: False Jul 10 00:24:21.773095 waagent[1826]: 2025-07-10T00:24:21.772830Z INFO Daemon Daemon Copying ovf-env.xml Jul 10 00:24:21.783947 kubelet[1845]: E0710 00:24:21.783922 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:24:21.785799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:24:21.785922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:24:21.786276 systemd[1]: kubelet.service: Consumed 961ms CPU time, 264.6M memory peak. Jul 10 00:24:21.815913 waagent[1826]: 2025-07-10T00:24:21.814350Z INFO Daemon Daemon Successfully mounted dvd Jul 10 00:24:21.836892 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 10 00:24:21.838716 waagent[1826]: 2025-07-10T00:24:21.838670Z INFO Daemon Daemon Detect protocol endpoint Jul 10 00:24:21.844795 waagent[1826]: 2025-07-10T00:24:21.839390Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:24:21.844795 waagent[1826]: 2025-07-10T00:24:21.839602Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 10 00:24:21.844795 waagent[1826]: 2025-07-10T00:24:21.839832Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 10 00:24:21.844795 waagent[1826]: 2025-07-10T00:24:21.839970Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 10 00:24:21.844795 waagent[1826]: 2025-07-10T00:24:21.840164Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 10 00:24:21.856452 waagent[1826]: 2025-07-10T00:24:21.856424Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 10 00:24:21.857472 waagent[1826]: 2025-07-10T00:24:21.857044Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 10 00:24:21.857472 waagent[1826]: 2025-07-10T00:24:21.857248Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 10 00:24:21.914258 waagent[1826]: 2025-07-10T00:24:21.914205Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 10 00:24:21.914883 waagent[1826]: 2025-07-10T00:24:21.914489Z INFO Daemon Daemon Forcing an update of the goal state. Jul 10 00:24:21.919224 waagent[1826]: 2025-07-10T00:24:21.919186Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:24:21.930495 waagent[1826]: 2025-07-10T00:24:21.930463Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 10 00:24:21.931560 waagent[1826]: 2025-07-10T00:24:21.931052Z INFO Daemon Jul 10 00:24:21.931560 waagent[1826]: 2025-07-10T00:24:21.931203Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1faa0382-4ebc-4843-8039-a13c5f14b7f0 eTag: 4656297625485455768 source: Fabric] Jul 10 00:24:21.931560 waagent[1826]: 2025-07-10T00:24:21.931460Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:24:21.932548 waagent[1826]: 2025-07-10T00:24:21.931738Z INFO Daemon Jul 10 00:24:21.932548 waagent[1826]: 2025-07-10T00:24:21.931877Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:24:21.935600 waagent[1826]: 2025-07-10T00:24:21.935054Z INFO Daemon Daemon Downloading artifacts profile blob Jul 10 00:24:22.024299 waagent[1826]: 2025-07-10T00:24:22.024223Z INFO Daemon Downloaded certificate {'thumbprint': '448BCAAF02C943AD71FA039339BBECCAE40953F6', 'hasPrivateKey': True} Jul 10 00:24:22.026543 waagent[1826]: 2025-07-10T00:24:22.026509Z INFO Daemon Fetch goal state completed Jul 10 00:24:22.034638 waagent[1826]: 2025-07-10T00:24:22.034593Z INFO Daemon Daemon Starting provisioning Jul 10 00:24:22.035598 waagent[1826]: 2025-07-10T00:24:22.035567Z INFO Daemon Daemon Handle ovf-env.xml. Jul 10 00:24:22.036536 waagent[1826]: 2025-07-10T00:24:22.036386Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-4eb7f9ac8a] Jul 10 00:24:22.056032 waagent[1826]: 2025-07-10T00:24:22.055995Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-4eb7f9ac8a] Jul 10 00:24:22.057550 waagent[1826]: 2025-07-10T00:24:22.057515Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 10 00:24:22.057827 waagent[1826]: 2025-07-10T00:24:22.057802Z INFO Daemon Daemon Primary interface is [eth0] Jul 10 00:24:22.065373 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:24:22.065379 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 00:24:22.065401 systemd-networkd[1357]: eth0: DHCP lease lost Jul 10 00:24:22.066259 waagent[1826]: 2025-07-10T00:24:22.066213Z INFO Daemon Daemon Create user account if not exists Jul 10 00:24:22.066864 waagent[1826]: 2025-07-10T00:24:22.066831Z INFO Daemon Daemon User core already exists, skip useradd Jul 10 00:24:22.068550 waagent[1826]: 2025-07-10T00:24:22.067437Z INFO Daemon Daemon Configure sudoer Jul 10 00:24:22.076177 waagent[1826]: 2025-07-10T00:24:22.076074Z INFO Daemon Daemon Configure sshd Jul 10 00:24:22.079804 waagent[1826]: 2025-07-10T00:24:22.079767Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 10 00:24:22.083450 waagent[1826]: 2025-07-10T00:24:22.080290Z INFO Daemon Daemon Deploy ssh public key. Jul 10 00:24:22.085007 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:24:23.150251 waagent[1826]: 2025-07-10T00:24:23.150202Z INFO Daemon Daemon Provisioning complete Jul 10 00:24:23.159770 waagent[1826]: 2025-07-10T00:24:23.159740Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 10 00:24:23.160803 waagent[1826]: 2025-07-10T00:24:23.160244Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 10 00:24:23.160803 waagent[1826]: 2025-07-10T00:24:23.160430Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 10 00:24:23.258711 waagent[1909]: 2025-07-10T00:24:23.258644Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 10 00:24:23.258949 waagent[1909]: 2025-07-10T00:24:23.258735Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 10 00:24:23.258949 waagent[1909]: 2025-07-10T00:24:23.258776Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 10 00:24:23.258949 waagent[1909]: 2025-07-10T00:24:23.258816Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 10 00:24:23.290448 waagent[1909]: 2025-07-10T00:24:23.290399Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 10 00:24:23.290562 waagent[1909]: 2025-07-10T00:24:23.290540Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:24:23.290609 waagent[1909]: 2025-07-10T00:24:23.290590Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:24:23.294774 waagent[1909]: 2025-07-10T00:24:23.294724Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:24:23.303545 waagent[1909]: 2025-07-10T00:24:23.303516Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 10 00:24:23.303853 waagent[1909]: 2025-07-10T00:24:23.303825Z INFO ExtHandler Jul 10 00:24:23.303900 waagent[1909]: 2025-07-10T00:24:23.303876Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f0e1d084-9c1f-4d7b-a6cd-821982ec0236 eTag: 4656297625485455768 source: Fabric] Jul 10 00:24:23.304092 waagent[1909]: 2025-07-10T00:24:23.304053Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:24:23.304412 waagent[1909]: 2025-07-10T00:24:23.304387Z INFO ExtHandler Jul 10 00:24:23.304444 waagent[1909]: 2025-07-10T00:24:23.304427Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:24:23.315313 waagent[1909]: 2025-07-10T00:24:23.315290Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 10 00:24:23.408782 waagent[1909]: 2025-07-10T00:24:23.408735Z INFO ExtHandler Downloaded certificate {'thumbprint': '448BCAAF02C943AD71FA039339BBECCAE40953F6', 'hasPrivateKey': True} Jul 10 00:24:23.409138 waagent[1909]: 2025-07-10T00:24:23.409077Z INFO ExtHandler Fetch goal state completed Jul 10 00:24:23.428202 waagent[1909]: 2025-07-10T00:24:23.428157Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 10 00:24:23.432381 waagent[1909]: 2025-07-10T00:24:23.432339Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1909 Jul 10 00:24:23.432492 waagent[1909]: 2025-07-10T00:24:23.432469Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 10 00:24:23.432716 waagent[1909]: 2025-07-10T00:24:23.432696Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 10 00:24:23.433694 waagent[1909]: 2025-07-10T00:24:23.433664Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 10 00:24:23.433952 waagent[1909]: 2025-07-10T00:24:23.433929Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 10 00:24:23.434066 waagent[1909]: 2025-07-10T00:24:23.434046Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 10 00:24:23.434460 waagent[1909]: 2025-07-10T00:24:23.434434Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jul 10 00:24:23.461872 waagent[1909]: 2025-07-10T00:24:23.461848Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 10 00:24:23.461995 waagent[1909]: 2025-07-10T00:24:23.461975Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 10 00:24:23.467100 waagent[1909]: 2025-07-10T00:24:23.466937Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 10 00:24:23.472549 systemd[1]: Reload requested from client PID 1924 ('systemctl') (unit waagent.service)... Jul 10 00:24:23.472563 systemd[1]: Reloading... Jul 10 00:24:23.558134 zram_generator::config[1968]: No configuration found. Jul 10 00:24:23.624864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:24:23.718664 systemd[1]: Reloading finished in 245 ms. Jul 10 00:24:23.730094 waagent[1909]: 2025-07-10T00:24:23.728475Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 10 00:24:23.730094 waagent[1909]: 2025-07-10T00:24:23.728610Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 10 00:24:23.848612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 10 00:24:24.240444 waagent[1909]: 2025-07-10T00:24:24.240379Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 10 00:24:24.240697 waagent[1909]: 2025-07-10T00:24:24.240674Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jul 10 00:24:24.241346 waagent[1909]: 2025-07-10T00:24:24.241299Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 10 00:24:24.241412 waagent[1909]: 2025-07-10T00:24:24.241379Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 10 00:24:24.241478 waagent[1909]: 2025-07-10T00:24:24.241445Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 10 00:24:24.241643 waagent[1909]: 2025-07-10T00:24:24.241622Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 10 00:24:24.242042 waagent[1909]: 2025-07-10T00:24:24.242014Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 10 00:24:24.242042 waagent[1909]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 10 00:24:24.242042 waagent[1909]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 10 00:24:24.242042 waagent[1909]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 10 00:24:24.242042 waagent[1909]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 10 00:24:24.242042 waagent[1909]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 10 00:24:24.242042 waagent[1909]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 10 00:24:24.242244 waagent[1909]: 2025-07-10T00:24:24.242112Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 10 00:24:24.242244 waagent[1909]: 2025-07-10T00:24:24.242170Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 10 00:24:24.242290 waagent[1909]: 2025-07-10T00:24:24.242271Z INFO EnvHandler ExtHandler Configure routes
Jul 10 00:24:24.242327 waagent[1909]: 2025-07-10T00:24:24.242307Z INFO EnvHandler ExtHandler Gateway:None
Jul 10 00:24:24.242428 waagent[1909]: 2025-07-10T00:24:24.242403Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 10 00:24:24.242691 waagent[1909]: 2025-07-10T00:24:24.242657Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 10 00:24:24.242803 waagent[1909]: 2025-07-10T00:24:24.242771Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 10 00:24:24.243216 waagent[1909]: 2025-07-10T00:24:24.243181Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 10 00:24:24.243268 waagent[1909]: 2025-07-10T00:24:24.243243Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 10 00:24:24.243583 waagent[1909]: 2025-07-10T00:24:24.243560Z INFO EnvHandler ExtHandler Routes:None
Jul 10 00:24:24.244239 waagent[1909]: 2025-07-10T00:24:24.243892Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 10 00:24:24.273187 waagent[1909]: 2025-07-10T00:24:24.273138Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 10 00:24:24.273187 waagent[1909]: Executing ['ip', '-a', '-o', 'link']:
Jul 10 00:24:24.273187 waagent[1909]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 10 00:24:24.273187 waagent[1909]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:45:a4:d9 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 10 00:24:24.273187 waagent[1909]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:45:a4:d9 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 10 00:24:24.273187 waagent[1909]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 10 00:24:24.273187 waagent[1909]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 10 00:24:24.273187 waagent[1909]: 2: eth0 inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 10 00:24:24.273187 waagent[1909]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 10 00:24:24.273187 waagent[1909]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 10 00:24:24.273187 waagent[1909]: 2: eth0 inet6 fe80::7eed:8dff:fe45:a4d9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 00:24:24.273187 waagent[1909]: 3: enP30832s1 inet6 fe80::7eed:8dff:fe45:a4d9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 00:24:24.309707 waagent[1909]: 2025-07-10T00:24:24.309664Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 10 00:24:24.309707 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.309707 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.309707 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.309707 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.309707 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.309707 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.309707 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 00:24:24.309707 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 00:24:24.309707 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 00:24:24.312376 waagent[1909]: 2025-07-10T00:24:24.312328Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 10 00:24:24.312376 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.312376 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.312376 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.312376 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.312376 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:24:24.312376 waagent[1909]: pkts bytes target prot opt in out source destination
Jul 10 00:24:24.312376 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 00:24:24.312376 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 00:24:24.312376 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 00:24:24.349251 waagent[1909]: 2025-07-10T00:24:24.349221Z INFO ExtHandler ExtHandler
Jul 10 00:24:24.349313 waagent[1909]: 2025-07-10T00:24:24.349280Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 40695c3f-65ea-4ed1-9ffa-9601fbef81e0 correlation 4df4407d-c731-461c-8710-533b9c0c3e82 created: 2025-07-10T00:22:52.936470Z]
Jul 10 00:24:24.349560 waagent[1909]: 2025-07-10T00:24:24.349533Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 10 00:24:24.350110 waagent[1909]: 2025-07-10T00:24:24.350066Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 10 00:24:24.378005 waagent[1909]: 2025-07-10T00:24:24.377960Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 10 00:24:24.378005 waagent[1909]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 10 00:24:24.378332 waagent[1909]: 2025-07-10T00:24:24.378306Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 97086660-8A2F-46B3-9F29-75FDC82B3EA2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 10 00:24:31.905317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:24:31.907076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:32.426187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:32.429287 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:32.459899 kubelet[2060]: E0710 00:24:32.459867 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:32.462580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:32.462703 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:32.463011 systemd[1]: kubelet.service: Consumed 126ms CPU time, 108.3M memory peak.
Jul 10 00:24:40.351369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 00:24:40.352518 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:41474.service - OpenSSH per-connection server daemon (10.200.16.10:41474).
Jul 10 00:24:41.071908 sshd[2068]: Accepted publickey for core from 10.200.16.10 port 41474 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:41.073307 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:41.078020 systemd-logind[1699]: New session 3 of user core.
Jul 10 00:24:41.084219 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 00:24:41.632359 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:41486.service - OpenSSH per-connection server daemon (10.200.16.10:41486).
Jul 10 00:24:42.262056 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 41486 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:42.263415 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:42.267914 systemd-logind[1699]: New session 4 of user core.
Jul 10 00:24:42.273212 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 10 00:24:42.607185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:24:42.608546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:42.705843 sshd[2075]: Connection closed by 10.200.16.10 port 41486
Jul 10 00:24:42.706305 sshd-session[2073]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:42.708649 systemd[1]: sshd@1-10.200.8.45:22-10.200.16.10:41486.service: Deactivated successfully.
Jul 10 00:24:42.710578 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:24:42.713535 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:24:42.714310 systemd-logind[1699]: Removed session 4.
Jul 10 00:24:42.834683 systemd[1]: Started sshd@2-10.200.8.45:22-10.200.16.10:41496.service - OpenSSH per-connection server daemon (10.200.16.10:41496).
Jul 10 00:24:43.063953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:43.073340 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:43.103412 kubelet[2091]: E0710 00:24:43.103381 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:43.104999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:43.105153 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:43.105452 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.5M memory peak.
Jul 10 00:24:43.364161 chronyd[1737]: Selected source PHC0
Jul 10 00:24:43.463666 sshd[2084]: Accepted publickey for core from 10.200.16.10 port 41496 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:43.464956 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:43.469156 systemd-logind[1699]: New session 5 of user core.
Jul 10 00:24:43.479195 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 10 00:24:43.902490 sshd[2098]: Connection closed by 10.200.16.10 port 41496
Jul 10 00:24:43.903071 sshd-session[2084]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:43.906791 systemd[1]: sshd@2-10.200.8.45:22-10.200.16.10:41496.service: Deactivated successfully.
Jul 10 00:24:43.908357 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:24:43.909017 systemd-logind[1699]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:24:43.910177 systemd-logind[1699]: Removed session 5.
Jul 10 00:24:44.024188 systemd[1]: Started sshd@3-10.200.8.45:22-10.200.16.10:41498.service - OpenSSH per-connection server daemon (10.200.16.10:41498).
Jul 10 00:24:44.652184 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 41498 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:44.653481 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:44.658195 systemd-logind[1699]: New session 6 of user core.
Jul 10 00:24:44.664218 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 10 00:24:45.094293 sshd[2106]: Connection closed by 10.200.16.10 port 41498
Jul 10 00:24:45.094897 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:45.098199 systemd[1]: sshd@3-10.200.8.45:22-10.200.16.10:41498.service: Deactivated successfully.
Jul 10 00:24:45.099865 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:24:45.101806 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:24:45.102683 systemd-logind[1699]: Removed session 6.
Jul 10 00:24:45.218483 systemd[1]: Started sshd@4-10.200.8.45:22-10.200.16.10:41500.service - OpenSSH per-connection server daemon (10.200.16.10:41500).
Jul 10 00:24:45.848436 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 41500 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:45.849770 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:45.854413 systemd-logind[1699]: New session 7 of user core.
Jul 10 00:24:45.860239 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 10 00:24:46.313681 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:24:46.313935 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:46.325071 sudo[2115]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:46.427792 sshd[2114]: Connection closed by 10.200.16.10 port 41500
Jul 10 00:24:46.428579 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:46.431964 systemd[1]: sshd@4-10.200.8.45:22-10.200.16.10:41500.service: Deactivated successfully.
Jul 10 00:24:46.433465 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:24:46.435144 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:24:46.436004 systemd-logind[1699]: Removed session 7.
Jul 10 00:24:46.538827 systemd[1]: Started sshd@5-10.200.8.45:22-10.200.16.10:41502.service - OpenSSH per-connection server daemon (10.200.16.10:41502).
Jul 10 00:24:47.167965 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 41502 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:47.169376 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:47.173913 systemd-logind[1699]: New session 8 of user core.
Jul 10 00:24:47.181233 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:24:47.512172 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:24:47.512559 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:47.518824 sudo[2125]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:47.522742 sudo[2124]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 10 00:24:47.522969 sudo[2124]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:47.530489 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:24:47.559434 augenrules[2147]: No rules
Jul 10 00:24:47.560302 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:24:47.560475 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:24:47.561322 sudo[2124]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:47.663710 sshd[2123]: Connection closed by 10.200.16.10 port 41502
Jul 10 00:24:47.664284 sshd-session[2121]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:47.667351 systemd[1]: sshd@5-10.200.8.45:22-10.200.16.10:41502.service: Deactivated successfully.
Jul 10 00:24:47.668939 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:24:47.670140 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:24:47.671541 systemd-logind[1699]: Removed session 8.
Jul 10 00:24:47.791354 systemd[1]: Started sshd@6-10.200.8.45:22-10.200.16.10:41518.service - OpenSSH per-connection server daemon (10.200.16.10:41518).
Jul 10 00:24:48.421748 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 41518 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:48.423048 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:48.427766 systemd-logind[1699]: New session 9 of user core.
Jul 10 00:24:48.437226 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:24:48.764581 sudo[2159]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:24:48.764801 sudo[2159]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:49.659060 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 00:24:49.668432 (dockerd)[2176]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 00:24:50.277300 dockerd[2176]: time="2025-07-10T00:24:50.277027063Z" level=info msg="Starting up"
Jul 10 00:24:50.278342 dockerd[2176]: time="2025-07-10T00:24:50.278316676Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 10 00:24:50.412664 dockerd[2176]: time="2025-07-10T00:24:50.412618674Z" level=info msg="Loading containers: start."
Jul 10 00:24:50.434106 kernel: Initializing XFRM netlink socket
Jul 10 00:24:50.640192 systemd-networkd[1357]: docker0: Link UP
Jul 10 00:24:50.654666 dockerd[2176]: time="2025-07-10T00:24:50.654631979Z" level=info msg="Loading containers: done."
Jul 10 00:24:50.668884 dockerd[2176]: time="2025-07-10T00:24:50.668853028Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:24:50.669003 dockerd[2176]: time="2025-07-10T00:24:50.668920136Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 10 00:24:50.669030 dockerd[2176]: time="2025-07-10T00:24:50.669010266Z" level=info msg="Initializing buildkit"
Jul 10 00:24:50.704011 dockerd[2176]: time="2025-07-10T00:24:50.703958062Z" level=info msg="Completed buildkit initialization"
Jul 10 00:24:50.710226 dockerd[2176]: time="2025-07-10T00:24:50.710196308Z" level=info msg="Daemon has completed initialization"
Jul 10 00:24:50.710389 dockerd[2176]: time="2025-07-10T00:24:50.710241122Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:24:50.710479 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:24:52.119963 containerd[1724]: time="2025-07-10T00:24:52.119770223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 10 00:24:52.782993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734900816.mount: Deactivated successfully.
Jul 10 00:24:53.155372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 00:24:53.158474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:53.879109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:53.887291 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:53.922101 kubelet[2422]: E0710 00:24:53.922045 2422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:53.923541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:53.923660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:53.924024 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.8M memory peak.
Jul 10 00:24:54.384504 containerd[1724]: time="2025-07-10T00:24:54.384458860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:54.386528 containerd[1724]: time="2025-07-10T00:24:54.386491899Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jul 10 00:24:54.388840 containerd[1724]: time="2025-07-10T00:24:54.388800614Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:54.392014 containerd[1724]: time="2025-07-10T00:24:54.391962369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:54.392612 containerd[1724]: time="2025-07-10T00:24:54.392475561Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.272671474s"
Jul 10 00:24:54.392612 containerd[1724]: time="2025-07-10T00:24:54.392503859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 10 00:24:54.393110 containerd[1724]: time="2025-07-10T00:24:54.393094520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 10 00:24:55.538438 containerd[1724]: time="2025-07-10T00:24:55.538390709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:55.541158 containerd[1724]: time="2025-07-10T00:24:55.541122878Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jul 10 00:24:55.543577 containerd[1724]: time="2025-07-10T00:24:55.543545457Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:55.546883 containerd[1724]: time="2025-07-10T00:24:55.546849804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:55.547569 containerd[1724]: time="2025-07-10T00:24:55.547429026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.154261692s"
Jul 10 00:24:55.547569 containerd[1724]: time="2025-07-10T00:24:55.547461698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 10 00:24:55.548175 containerd[1724]: time="2025-07-10T00:24:55.548152211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 10 00:24:56.646197 containerd[1724]: time="2025-07-10T00:24:56.646147397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:56.648685 containerd[1724]: time="2025-07-10T00:24:56.648649575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jul 10 00:24:56.650959 containerd[1724]: time="2025-07-10T00:24:56.650925283Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:56.654530 containerd[1724]: time="2025-07-10T00:24:56.654484640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:56.655089 containerd[1724]: time="2025-07-10T00:24:56.655060885Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.106883397s"
Jul 10 00:24:56.655250 containerd[1724]: time="2025-07-10T00:24:56.655148055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 10 00:24:56.655610 containerd[1724]: time="2025-07-10T00:24:56.655596000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 10 00:24:57.722073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133312407.mount: Deactivated successfully.
Jul 10 00:24:58.074920 containerd[1724]: time="2025-07-10T00:24:58.074811024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:58.076976 containerd[1724]: time="2025-07-10T00:24:58.076944888Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jul 10 00:24:58.079666 containerd[1724]: time="2025-07-10T00:24:58.079624231Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:58.083410 containerd[1724]: time="2025-07-10T00:24:58.083368968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:58.083891 containerd[1724]: time="2025-07-10T00:24:58.083645969Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.427956636s"
Jul 10 00:24:58.083891 containerd[1724]: time="2025-07-10T00:24:58.083678056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 10 00:24:58.084139 containerd[1724]: time="2025-07-10T00:24:58.084122682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:24:58.865459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991305983.mount: Deactivated successfully.
Jul 10 00:24:59.848578 containerd[1724]: time="2025-07-10T00:24:59.848536138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:59.850926 containerd[1724]: time="2025-07-10T00:24:59.850892034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 10 00:24:59.853575 containerd[1724]: time="2025-07-10T00:24:59.853546236Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:59.857206 containerd[1724]: time="2025-07-10T00:24:59.857166103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:59.857879 containerd[1724]: time="2025-07-10T00:24:59.857734291Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.773584433s"
Jul 10 00:24:59.857879 containerd[1724]: time="2025-07-10T00:24:59.857766652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 00:24:59.858206 containerd[1724]: time="2025-07-10T00:24:59.858188494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:25:00.412348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445065865.mount: Deactivated successfully.
Jul 10 00:25:00.446304 containerd[1724]: time="2025-07-10T00:25:00.446269612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:25:00.448906 containerd[1724]: time="2025-07-10T00:25:00.448872315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 10 00:25:00.451697 containerd[1724]: time="2025-07-10T00:25:00.451660688Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:25:00.455046 containerd[1724]: time="2025-07-10T00:25:00.454960677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:25:00.455451 containerd[1724]: time="2025-07-10T00:25:00.455326840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 597.109799ms"
Jul 10 00:25:00.455451 containerd[1724]: time="2025-07-10T00:25:00.455358013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:25:00.455919 containerd[1724]: time="2025-07-10T00:25:00.455895565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 10 00:25:01.017900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594359258.mount: Deactivated successfully.
Jul 10 00:25:02.645659 containerd[1724]: time="2025-07-10T00:25:02.645609987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:25:02.647510 containerd[1724]: time="2025-07-10T00:25:02.647474035Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 10 00:25:02.650238 containerd[1724]: time="2025-07-10T00:25:02.650200000Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:25:02.653764 containerd[1724]: time="2025-07-10T00:25:02.653723113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:25:02.654472 containerd[1724]: time="2025-07-10T00:25:02.654361068Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.198440519s"
Jul 10 00:25:02.654472 containerd[1724]: time="2025-07-10T00:25:02.654387043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 10 00:25:04.155097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 00:25:04.159288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:25:04.642277 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:25:04.642359 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:25:04.642627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:25:04.642877 systemd[1]: kubelet.service: Consumed 86ms CPU time, 92.8M memory peak.
Jul 10 00:25:04.644985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:25:04.665102 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 10 00:25:04.668687 systemd[1]: Reload requested from client PID 2602 ('systemctl') (unit session-9.scope)...
Jul 10 00:25:04.668700 systemd[1]: Reloading...
Jul 10 00:25:04.759152 zram_generator::config[2651]: No configuration found.
Jul 10 00:25:04.888010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:25:04.979550 systemd[1]: Reloading finished in 310 ms.
Jul 10 00:25:05.014446 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:25:05.014656 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:25:05.014917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:25:05.014957 systemd[1]: kubelet.service: Consumed 79ms CPU time, 84M memory peak.
Jul 10 00:25:05.016747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:25:05.033190 update_engine[1701]: I20250710 00:25:05.033142 1701 update_attempter.cc:509] Updating boot flags...
Jul 10 00:25:05.676026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:25:05.681410 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:25:05.714115 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:25:05.714115 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:25:05.714115 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:25:05.714372 kubelet[2760]: I0710 00:25:05.714149 2760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:25:06.054449 kubelet[2760]: I0710 00:25:06.054345 2760 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:25:06.054449 kubelet[2760]: I0710 00:25:06.054371 2760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:25:06.054838 kubelet[2760]: I0710 00:25:06.054810 2760 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:25:06.085722 kubelet[2760]: E0710 00:25:06.085696 2760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:25:06.086406 kubelet[2760]: I0710 
00:25:06.086386 2760 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:25:06.095159 kubelet[2760]: I0710 00:25:06.095059 2760 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:25:06.098848 kubelet[2760]: I0710 00:25:06.098820 2760 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:25:06.099445 kubelet[2760]: I0710 00:25:06.099428 2760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:25:06.099591 kubelet[2760]: I0710 00:25:06.099555 2760 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:25:06.099736 kubelet[2760]: I0710 00:25:06.099591 2760 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4eb7f9ac8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"Le
ssThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:25:06.099848 kubelet[2760]: I0710 00:25:06.099743 2760 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:25:06.099848 kubelet[2760]: I0710 00:25:06.099752 2760 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:25:06.099848 kubelet[2760]: I0710 00:25:06.099846 2760 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:25:06.102546 kubelet[2760]: I0710 00:25:06.102350 2760 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:25:06.102546 kubelet[2760]: I0710 00:25:06.102370 2760 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:25:06.102546 kubelet[2760]: I0710 00:25:06.102400 2760 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:25:06.102546 kubelet[2760]: I0710 00:25:06.102415 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:25:06.109378 kubelet[2760]: W0710 00:25:06.109234 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4eb7f9ac8a&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jul 10 00:25:06.109378 kubelet[2760]: E0710 00:25:06.109282 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4eb7f9ac8a&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:25:06.109378 kubelet[2760]: W0710 00:25:06.109339 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jul 10 00:25:06.109378 kubelet[2760]: E0710 00:25:06.109363 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:25:06.109565 kubelet[2760]: I0710 00:25:06.109557 2760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:25:06.109892 kubelet[2760]: I0710 00:25:06.109881 2760 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:25:06.110437 kubelet[2760]: W0710 00:25:06.110425 2760 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 10 00:25:06.112251 kubelet[2760]: I0710 00:25:06.112169 2760 server.go:1274] "Started kubelet" Jul 10 00:25:06.113354 kubelet[2760]: I0710 00:25:06.113334 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:25:06.117315 kubelet[2760]: I0710 00:25:06.117288 2760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:25:06.117893 kubelet[2760]: I0710 00:25:06.117870 2760 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:25:06.133647 kubelet[2760]: I0710 00:25:06.133595 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:25:06.133793 kubelet[2760]: I0710 00:25:06.133777 2760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:25:06.135048 kubelet[2760]: I0710 00:25:06.133974 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:25:06.135048 kubelet[2760]: E0710 00:25:06.132924 2760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-4eb7f9ac8a.1850bc2144408a7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-4eb7f9ac8a,UID:ci-4344.1.1-n-4eb7f9ac8a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-4eb7f9ac8a,},FirstTimestamp:2025-07-10 00:25:06.112146043 +0000 UTC m=+0.427520964,LastTimestamp:2025-07-10 00:25:06.112146043 +0000 UTC m=+0.427520964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-4eb7f9ac8a,}" 
Jul 10 00:25:06.135190 kubelet[2760]: I0710 00:25:06.135142 2760 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:25:06.135318 kubelet[2760]: E0710 00:25:06.135301 2760 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4eb7f9ac8a\" not found" Jul 10 00:25:06.137055 kubelet[2760]: E0710 00:25:06.137026 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4eb7f9ac8a?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="200ms" Jul 10 00:25:06.137704 kubelet[2760]: I0710 00:25:06.137689 2760 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:25:06.138012 kubelet[2760]: I0710 00:25:06.137997 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:25:06.138242 kubelet[2760]: I0710 00:25:06.138228 2760 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:25:06.138292 kubelet[2760]: I0710 00:25:06.138275 2760 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:25:06.140098 kubelet[2760]: W0710 00:25:06.139796 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jul 10 00:25:06.140235 kubelet[2760]: E0710 00:25:06.140213 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 
10 00:25:06.140367 kubelet[2760]: I0710 00:25:06.140355 2760 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:25:06.150971 kubelet[2760]: I0710 00:25:06.150939 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:25:06.155516 kubelet[2760]: I0710 00:25:06.155072 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:25:06.155516 kubelet[2760]: I0710 00:25:06.155116 2760 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:25:06.155516 kubelet[2760]: I0710 00:25:06.155129 2760 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:25:06.155516 kubelet[2760]: E0710 00:25:06.155155 2760 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:25:06.158200 kubelet[2760]: W0710 00:25:06.158182 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jul 10 00:25:06.158261 kubelet[2760]: E0710 00:25:06.158209 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:25:06.158545 kubelet[2760]: I0710 00:25:06.158524 2760 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:25:06.158545 kubelet[2760]: I0710 00:25:06.158534 2760 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:25:06.158545 kubelet[2760]: I0710 00:25:06.158546 2760 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:25:06.163064 
kubelet[2760]: I0710 00:25:06.163045 2760 policy_none.go:49] "None policy: Start" Jul 10 00:25:06.163593 kubelet[2760]: I0710 00:25:06.163578 2760 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:25:06.163652 kubelet[2760]: I0710 00:25:06.163597 2760 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:25:06.172204 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:25:06.183969 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:25:06.186628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:25:06.197532 kubelet[2760]: I0710 00:25:06.197518 2760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:25:06.197940 kubelet[2760]: I0710 00:25:06.197932 2760 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:25:06.198013 kubelet[2760]: I0710 00:25:06.197991 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:25:06.198190 kubelet[2760]: I0710 00:25:06.198181 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:25:06.199337 kubelet[2760]: E0710 00:25:06.199302 2760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-4eb7f9ac8a\" not found" Jul 10 00:25:06.263345 systemd[1]: Created slice kubepods-burstable-podc6986494512021782996f05ffb85dd0b.slice - libcontainer container kubepods-burstable-podc6986494512021782996f05ffb85dd0b.slice. Jul 10 00:25:06.273567 systemd[1]: Created slice kubepods-burstable-pod74bfb62d18c828f498e62d28ceb7e06c.slice - libcontainer container kubepods-burstable-pod74bfb62d18c828f498e62d28ceb7e06c.slice. 
Jul 10 00:25:06.276494 systemd[1]: Created slice kubepods-burstable-pod63bcf86ac1f32c56363bba5d6654afef.slice - libcontainer container kubepods-burstable-pod63bcf86ac1f32c56363bba5d6654afef.slice. Jul 10 00:25:06.300111 kubelet[2760]: I0710 00:25:06.300050 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.300367 kubelet[2760]: E0710 00:25:06.300349 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.337838 kubelet[2760]: E0710 00:25:06.337758 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4eb7f9ac8a?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="400ms" Jul 10 00:25:06.339040 kubelet[2760]: I0710 00:25:06.338979 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6986494512021782996f05ffb85dd0b-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"c6986494512021782996f05ffb85dd0b\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339255 kubelet[2760]: I0710 00:25:06.339240 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339304 kubelet[2760]: I0710 00:25:06.339268 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339304 kubelet[2760]: I0710 00:25:06.339290 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339415 kubelet[2760]: I0710 00:25:06.339309 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339415 kubelet[2760]: I0710 00:25:06.339327 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339415 kubelet[2760]: I0710 00:25:06.339350 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339415 kubelet[2760]: I0710 
00:25:06.339376 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.339415 kubelet[2760]: I0710 00:25:06.339407 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.502217 kubelet[2760]: I0710 00:25:06.502183 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.502580 kubelet[2760]: E0710 00:25:06.502547 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.572770 containerd[1724]: time="2025-07-10T00:25:06.572720412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a,Uid:c6986494512021782996f05ffb85dd0b,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:06.576672 containerd[1724]: time="2025-07-10T00:25:06.576633931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a,Uid:74bfb62d18c828f498e62d28ceb7e06c,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:06.579293 containerd[1724]: time="2025-07-10T00:25:06.579186828Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a,Uid:63bcf86ac1f32c56363bba5d6654afef,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:06.665103 containerd[1724]: time="2025-07-10T00:25:06.664870274Z" level=info msg="connecting to shim c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b" address="unix:///run/containerd/s/c761a40f6fff19fb51c3a5a73f7305e643f29ae7eda13dd55d4611a372225a99" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:06.668768 containerd[1724]: time="2025-07-10T00:25:06.668735142Z" level=info msg="connecting to shim d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23" address="unix:///run/containerd/s/206bc4f01da2fdc36473523a72f35e9046537e62a739937e51e5f2a1e4091c69" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:06.675993 containerd[1724]: time="2025-07-10T00:25:06.675965079Z" level=info msg="connecting to shim 51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435" address="unix:///run/containerd/s/a9f89e76a3afb5777c41ffd8e5f5cf03fe00e7a12aa95c468cc57451a10f3970" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:06.706239 systemd[1]: Started cri-containerd-c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b.scope - libcontainer container c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b. Jul 10 00:25:06.707669 systemd[1]: Started cri-containerd-d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23.scope - libcontainer container d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23. Jul 10 00:25:06.712197 systemd[1]: Started cri-containerd-51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435.scope - libcontainer container 51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435. 
Jul 10 00:25:06.738817 kubelet[2760]: E0710 00:25:06.738787 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4eb7f9ac8a?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="800ms" Jul 10 00:25:06.772339 containerd[1724]: time="2025-07-10T00:25:06.771248715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a,Uid:c6986494512021782996f05ffb85dd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23\"" Jul 10 00:25:06.782941 containerd[1724]: time="2025-07-10T00:25:06.782857138Z" level=info msg="CreateContainer within sandbox \"d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:25:06.785538 containerd[1724]: time="2025-07-10T00:25:06.785493721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a,Uid:74bfb62d18c828f498e62d28ceb7e06c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b\"" Jul 10 00:25:06.789920 containerd[1724]: time="2025-07-10T00:25:06.789611495Z" level=info msg="CreateContainer within sandbox \"c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:25:06.797567 containerd[1724]: time="2025-07-10T00:25:06.797548355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a,Uid:63bcf86ac1f32c56363bba5d6654afef,Namespace:kube-system,Attempt:0,} returns sandbox id \"51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435\"" Jul 10 00:25:06.799042 containerd[1724]: time="2025-07-10T00:25:06.799020707Z" level=info msg="CreateContainer 
within sandbox \"51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:25:06.821116 containerd[1724]: time="2025-07-10T00:25:06.821093310Z" level=info msg="Container 25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:06.826193 containerd[1724]: time="2025-07-10T00:25:06.826169172Z" level=info msg="Container 3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:06.844497 containerd[1724]: time="2025-07-10T00:25:06.844472993Z" level=info msg="CreateContainer within sandbox \"c12dcdf0eefba440fd5b24245ee6da2adab38d1d281ef70bf44be6eb6ee7439b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64\"" Jul 10 00:25:06.844925 containerd[1724]: time="2025-07-10T00:25:06.844907033Z" level=info msg="StartContainer for \"25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64\"" Jul 10 00:25:06.845652 containerd[1724]: time="2025-07-10T00:25:06.845624891Z" level=info msg="connecting to shim 25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64" address="unix:///run/containerd/s/c761a40f6fff19fb51c3a5a73f7305e643f29ae7eda13dd55d4611a372225a99" protocol=ttrpc version=3 Jul 10 00:25:06.848104 containerd[1724]: time="2025-07-10T00:25:06.847906154Z" level=info msg="Container 9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:06.858361 containerd[1724]: time="2025-07-10T00:25:06.858341657Z" level=info msg="CreateContainer within sandbox \"d5224fdb296b97de64ba0abd89326f156125091cdff4985e08b16fef24466f23\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1\"" Jul 10 00:25:06.858747 
containerd[1724]: time="2025-07-10T00:25:06.858731019Z" level=info msg="StartContainer for \"3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1\"" Jul 10 00:25:06.859641 containerd[1724]: time="2025-07-10T00:25:06.859621241Z" level=info msg="connecting to shim 3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1" address="unix:///run/containerd/s/206bc4f01da2fdc36473523a72f35e9046537e62a739937e51e5f2a1e4091c69" protocol=ttrpc version=3 Jul 10 00:25:06.860235 systemd[1]: Started cri-containerd-25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64.scope - libcontainer container 25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64. Jul 10 00:25:06.868906 containerd[1724]: time="2025-07-10T00:25:06.868858322Z" level=info msg="CreateContainer within sandbox \"51440131e05e78d1ad6c26f459db07401cae48429c8f7a367099dc0d89fb2435\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f\"" Jul 10 00:25:06.869500 containerd[1724]: time="2025-07-10T00:25:06.869322560Z" level=info msg="StartContainer for \"9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f\"" Jul 10 00:25:06.873566 containerd[1724]: time="2025-07-10T00:25:06.873539341Z" level=info msg="connecting to shim 9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f" address="unix:///run/containerd/s/a9f89e76a3afb5777c41ffd8e5f5cf03fe00e7a12aa95c468cc57451a10f3970" protocol=ttrpc version=3 Jul 10 00:25:06.879430 systemd[1]: Started cri-containerd-3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1.scope - libcontainer container 3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1. Jul 10 00:25:06.891248 systemd[1]: Started cri-containerd-9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f.scope - libcontainer container 9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f. 
Jul 10 00:25:06.907777 kubelet[2760]: I0710 00:25:06.907489 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.907777 kubelet[2760]: E0710 00:25:06.907757 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:06.943258 containerd[1724]: time="2025-07-10T00:25:06.943196209Z" level=info msg="StartContainer for \"25bb96a20fca775c2e7b33be1589e2e7d1ceadd38d11520d24e06d72f6c2ef64\" returns successfully" Jul 10 00:25:06.965173 containerd[1724]: time="2025-07-10T00:25:06.965052641Z" level=info msg="StartContainer for \"3b1508abeb8b44b3c78b16753b745c6f131a4d975c507f240a272a81b34efbc1\" returns successfully" Jul 10 00:25:06.975462 containerd[1724]: time="2025-07-10T00:25:06.975398170Z" level=info msg="StartContainer for \"9e5371c7ea2ad4337663214b46de321bbe4993683a33cf5c49304b52b9f16c0f\" returns successfully" Jul 10 00:25:06.998201 kubelet[2760]: W0710 00:25:06.998115 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jul 10 00:25:06.998201 kubelet[2760]: E0710 00:25:06.998180 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:25:07.710100 kubelet[2760]: I0710 00:25:07.709843 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:08.685270 kubelet[2760]: E0710 00:25:08.685220 2760 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-4eb7f9ac8a\" not found" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:08.779366 kubelet[2760]: I0710 00:25:08.779335 2760 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:09.111371 kubelet[2760]: I0710 00:25:09.111259 2760 apiserver.go:52] "Watching apiserver" Jul 10 00:25:09.138798 kubelet[2760]: I0710 00:25:09.138763 2760 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:25:09.175610 kubelet[2760]: E0710 00:25:09.175580 2760 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:09.917161 kubelet[2760]: W0710 00:25:09.916926 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:25:10.623754 systemd[1]: Reload requested from client PID 3027 ('systemctl') (unit session-9.scope)... Jul 10 00:25:10.623768 systemd[1]: Reloading... Jul 10 00:25:10.706108 zram_generator::config[3073]: No configuration found. Jul 10 00:25:10.784965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:10.882518 systemd[1]: Reloading finished in 258 ms. Jul 10 00:25:10.902126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:10.915869 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:25:10.916074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:10.916137 systemd[1]: kubelet.service: Consumed 706ms CPU time, 128.4M memory peak. 
Jul 10 00:25:10.917585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:11.387174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:11.395349 (kubelet)[3140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:25:11.437322 kubelet[3140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:25:11.437322 kubelet[3140]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:25:11.437322 kubelet[3140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:25:11.437584 kubelet[3140]: I0710 00:25:11.437428 3140 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:25:11.444097 kubelet[3140]: I0710 00:25:11.443718 3140 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:25:11.444097 kubelet[3140]: I0710 00:25:11.443734 3140 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:25:11.444097 kubelet[3140]: I0710 00:25:11.443919 3140 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:25:11.445899 kubelet[3140]: I0710 00:25:11.445883 3140 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 10 00:25:11.448838 kubelet[3140]: I0710 00:25:11.448820 3140 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:25:11.453287 kubelet[3140]: I0710 00:25:11.453275 3140 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:25:11.455461 kubelet[3140]: I0710 00:25:11.455442 3140 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:25:11.457098 kubelet[3140]: I0710 00:25:11.455709 3140 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:25:11.457098 kubelet[3140]: I0710 00:25:11.455807 3140 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:25:11.457098 kubelet[3140]: I0710 00:25:11.455835 3140 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4eb7f9ac8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{
"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:25:11.457098 kubelet[3140]: I0710 00:25:11.456239 3140 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456255 3140 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456281 3140 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456358 3140 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456374 3140 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456401 3140 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:25:11.457309 kubelet[3140]: I0710 00:25:11.456411 3140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:25:11.461288 kubelet[3140]: I0710 00:25:11.461273 3140 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:25:11.461816 kubelet[3140]: I0710 00:25:11.461801 3140 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:25:11.462280 kubelet[3140]: I0710 00:25:11.462270 3140 server.go:1274] "Started kubelet" Jul 10 00:25:11.465426 kubelet[3140]: I0710 00:25:11.465413 3140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 
10 00:25:11.468511 kubelet[3140]: I0710 00:25:11.468479 3140 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:25:11.468673 kubelet[3140]: I0710 00:25:11.468661 3140 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:25:11.475994 kubelet[3140]: I0710 00:25:11.475957 3140 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:25:11.476863 kubelet[3140]: I0710 00:25:11.476852 3140 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:25:11.477738 kubelet[3140]: I0710 00:25:11.477723 3140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:25:11.478971 kubelet[3140]: I0710 00:25:11.478961 3140 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:25:11.479271 kubelet[3140]: E0710 00:25:11.479259 3140 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4eb7f9ac8a\" not found" Jul 10 00:25:11.480689 kubelet[3140]: I0710 00:25:11.480673 3140 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:25:11.480845 kubelet[3140]: I0710 00:25:11.480837 3140 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:25:11.482817 kubelet[3140]: I0710 00:25:11.482798 3140 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:25:11.482894 kubelet[3140]: I0710 00:25:11.482877 3140 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:25:11.483466 kubelet[3140]: E0710 00:25:11.483445 3140 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:25:11.483839 kubelet[3140]: I0710 00:25:11.483817 3140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:25:11.485621 kubelet[3140]: I0710 00:25:11.485607 3140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:25:11.485699 kubelet[3140]: I0710 00:25:11.485694 3140 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:25:11.485807 kubelet[3140]: I0710 00:25:11.485741 3140 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:25:11.485892 kubelet[3140]: E0710 00:25:11.485881 3140 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:25:11.485928 kubelet[3140]: I0710 00:25:11.485798 3140 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:25:11.527879 kubelet[3140]: I0710 00:25:11.527868 3140 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:25:11.527934 kubelet[3140]: I0710 00:25:11.527891 3140 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:25:11.527934 kubelet[3140]: I0710 00:25:11.527905 3140 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:25:11.528024 kubelet[3140]: I0710 00:25:11.528012 3140 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:25:11.528050 kubelet[3140]: I0710 00:25:11.528022 3140 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:25:11.528050 kubelet[3140]: I0710 00:25:11.528037 3140 policy_none.go:49] "None policy: Start" Jul 10 00:25:11.528557 kubelet[3140]: I0710 00:25:11.528545 3140 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:25:11.528605 kubelet[3140]: I0710 00:25:11.528562 3140 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:25:11.528694 kubelet[3140]: 
I0710 00:25:11.528686 3140 state_mem.go:75] "Updated machine memory state" Jul 10 00:25:11.532056 kubelet[3140]: I0710 00:25:11.531822 3140 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:25:11.532056 kubelet[3140]: I0710 00:25:11.531940 3140 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:25:11.532056 kubelet[3140]: I0710 00:25:11.531949 3140 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:25:11.532188 kubelet[3140]: I0710 00:25:11.532161 3140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:25:11.592205 kubelet[3140]: W0710 00:25:11.592188 3140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:25:11.596801 kubelet[3140]: W0710 00:25:11.596781 3140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:25:11.597183 kubelet[3140]: W0710 00:25:11.597171 3140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:25:11.597244 kubelet[3140]: E0710 00:25:11.597229 3140 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.634297 kubelet[3140]: I0710 00:25:11.634285 3140 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.646132 kubelet[3140]: I0710 00:25:11.644917 3140 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.646132 kubelet[3140]: I0710 00:25:11.644961 
3140 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682676 kubelet[3140]: I0710 00:25:11.682460 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682676 kubelet[3140]: I0710 00:25:11.682491 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682676 kubelet[3140]: I0710 00:25:11.682512 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682676 kubelet[3140]: I0710 00:25:11.682530 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682676 kubelet[3140]: I0710 00:25:11.682549 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/c6986494512021782996f05ffb85dd0b-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"c6986494512021782996f05ffb85dd0b\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682864 kubelet[3140]: I0710 00:25:11.682565 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682864 kubelet[3140]: I0710 00:25:11.682581 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74bfb62d18c828f498e62d28ceb7e06c-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"74bfb62d18c828f498e62d28ceb7e06c\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682864 kubelet[3140]: I0710 00:25:11.682597 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:11.682864 kubelet[3140]: I0710 00:25:11.682615 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63bcf86ac1f32c56363bba5d6654afef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a\" (UID: \"63bcf86ac1f32c56363bba5d6654afef\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:12.465165 kubelet[3140]: 
I0710 00:25:12.465121 3140 apiserver.go:52] "Watching apiserver" Jul 10 00:25:12.481514 kubelet[3140]: I0710 00:25:12.481489 3140 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:25:12.542651 kubelet[3140]: W0710 00:25:12.542512 3140 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:25:12.543459 kubelet[3140]: E0710 00:25:12.543311 3140 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:25:12.555043 kubelet[3140]: I0710 00:25:12.554823 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4eb7f9ac8a" podStartSLOduration=1.554790422 podStartE2EDuration="1.554790422s" podCreationTimestamp="2025-07-10 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:12.554766998 +0000 UTC m=+1.155537413" watchObservedRunningTime="2025-07-10 00:25:12.554790422 +0000 UTC m=+1.155560837" Jul 10 00:25:12.555043 kubelet[3140]: I0710 00:25:12.554954 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4eb7f9ac8a" podStartSLOduration=1.554946567 podStartE2EDuration="1.554946567s" podCreationTimestamp="2025-07-10 00:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:12.543733931 +0000 UTC m=+1.144504348" watchObservedRunningTime="2025-07-10 00:25:12.554946567 +0000 UTC m=+1.155716971" Jul 10 00:25:12.570677 kubelet[3140]: I0710 00:25:12.570630 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4eb7f9ac8a" podStartSLOduration=3.570618331 podStartE2EDuration="3.570618331s" podCreationTimestamp="2025-07-10 00:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:12.562852548 +0000 UTC m=+1.163622962" watchObservedRunningTime="2025-07-10 00:25:12.570618331 +0000 UTC m=+1.171388741" Jul 10 00:25:17.575661 kubelet[3140]: I0710 00:25:17.575613 3140 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:25:17.576288 kubelet[3140]: I0710 00:25:17.576104 3140 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:25:17.576325 containerd[1724]: time="2025-07-10T00:25:17.575924083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:25:18.459135 systemd[1]: Created slice kubepods-besteffort-poddb2562d2_8c6b_40b1_9057_12abeeb1635e.slice - libcontainer container kubepods-besteffort-poddb2562d2_8c6b_40b1_9057_12abeeb1635e.slice. 
Jul 10 00:25:18.526447 kubelet[3140]: I0710 00:25:18.526308 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db2562d2-8c6b-40b1-9057-12abeeb1635e-xtables-lock\") pod \"kube-proxy-mcv47\" (UID: \"db2562d2-8c6b-40b1-9057-12abeeb1635e\") " pod="kube-system/kube-proxy-mcv47" Jul 10 00:25:18.526447 kubelet[3140]: I0710 00:25:18.526353 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db2562d2-8c6b-40b1-9057-12abeeb1635e-kube-proxy\") pod \"kube-proxy-mcv47\" (UID: \"db2562d2-8c6b-40b1-9057-12abeeb1635e\") " pod="kube-system/kube-proxy-mcv47" Jul 10 00:25:18.526447 kubelet[3140]: I0710 00:25:18.526372 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db2562d2-8c6b-40b1-9057-12abeeb1635e-lib-modules\") pod \"kube-proxy-mcv47\" (UID: \"db2562d2-8c6b-40b1-9057-12abeeb1635e\") " pod="kube-system/kube-proxy-mcv47" Jul 10 00:25:18.526447 kubelet[3140]: I0710 00:25:18.526389 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkn6w\" (UniqueName: \"kubernetes.io/projected/db2562d2-8c6b-40b1-9057-12abeeb1635e-kube-api-access-kkn6w\") pod \"kube-proxy-mcv47\" (UID: \"db2562d2-8c6b-40b1-9057-12abeeb1635e\") " pod="kube-system/kube-proxy-mcv47" Jul 10 00:25:18.682798 systemd[1]: Created slice kubepods-besteffort-pod99933fb7_d7a3_430d_b918_c988d266a489.slice - libcontainer container kubepods-besteffort-pod99933fb7_d7a3_430d_b918_c988d266a489.slice. 
Jul 10 00:25:18.727574 kubelet[3140]: I0710 00:25:18.727488 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99933fb7-d7a3-430d-b918-c988d266a489-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-vqxc4\" (UID: \"99933fb7-d7a3-430d-b918-c988d266a489\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vqxc4" Jul 10 00:25:18.727574 kubelet[3140]: I0710 00:25:18.727521 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpc7z\" (UniqueName: \"kubernetes.io/projected/99933fb7-d7a3-430d-b918-c988d266a489-kube-api-access-jpc7z\") pod \"tigera-operator-5bf8dfcb4-vqxc4\" (UID: \"99933fb7-d7a3-430d-b918-c988d266a489\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vqxc4" Jul 10 00:25:18.766342 containerd[1724]: time="2025-07-10T00:25:18.766314523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcv47,Uid:db2562d2-8c6b-40b1-9057-12abeeb1635e,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:18.798638 containerd[1724]: time="2025-07-10T00:25:18.798600608Z" level=info msg="connecting to shim 4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9" address="unix:///run/containerd/s/8c233a8727cef4172db90496eda3f0ab361077e7cd010f735aac3fdb07838382" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:18.824278 systemd[1]: Started cri-containerd-4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9.scope - libcontainer container 4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9. 
Jul 10 00:25:18.850585 containerd[1724]: time="2025-07-10T00:25:18.850560380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcv47,Uid:db2562d2-8c6b-40b1-9057-12abeeb1635e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9\"" Jul 10 00:25:18.852988 containerd[1724]: time="2025-07-10T00:25:18.852961381Z" level=info msg="CreateContainer within sandbox \"4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:25:18.882573 containerd[1724]: time="2025-07-10T00:25:18.882548138Z" level=info msg="Container 2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:18.916017 containerd[1724]: time="2025-07-10T00:25:18.915936942Z" level=info msg="CreateContainer within sandbox \"4947639fe75306bdeb0e1917b7a9b66e7ec75c19e06ad5acd9947e836607cba9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a\"" Jul 10 00:25:18.919774 containerd[1724]: time="2025-07-10T00:25:18.919614993Z" level=info msg="StartContainer for \"2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a\"" Jul 10 00:25:18.921477 containerd[1724]: time="2025-07-10T00:25:18.921437721Z" level=info msg="connecting to shim 2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a" address="unix:///run/containerd/s/8c233a8727cef4172db90496eda3f0ab361077e7cd010f735aac3fdb07838382" protocol=ttrpc version=3 Jul 10 00:25:18.939241 systemd[1]: Started cri-containerd-2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a.scope - libcontainer container 2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a. 
Jul 10 00:25:18.970928 containerd[1724]: time="2025-07-10T00:25:18.970734950Z" level=info msg="StartContainer for \"2af2da2704386ed35151713d740f0db786fa4c338e029e74abe5d8d28c3a559a\" returns successfully" Jul 10 00:25:18.986949 containerd[1724]: time="2025-07-10T00:25:18.986885271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vqxc4,Uid:99933fb7-d7a3-430d-b918-c988d266a489,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:25:19.021595 containerd[1724]: time="2025-07-10T00:25:19.021566062Z" level=info msg="connecting to shim 7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241" address="unix:///run/containerd/s/142120f463a8947ca332fa7a3a6a47faaaf7da56ef0f991fc4513f7b74664acf" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:19.044388 systemd[1]: Started cri-containerd-7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241.scope - libcontainer container 7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241. Jul 10 00:25:19.088346 containerd[1724]: time="2025-07-10T00:25:19.088322413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vqxc4,Uid:99933fb7-d7a3-430d-b918-c988d266a489,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241\"" Jul 10 00:25:19.090024 containerd[1724]: time="2025-07-10T00:25:19.089999747Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:25:19.540471 kubelet[3140]: I0710 00:25:19.540357 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcv47" podStartSLOduration=1.540340329 podStartE2EDuration="1.540340329s" podCreationTimestamp="2025-07-10 00:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:19.539875563 +0000 UTC m=+8.140645977" watchObservedRunningTime="2025-07-10 
00:25:19.540340329 +0000 UTC m=+8.141110746" Jul 10 00:25:19.639618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4448683.mount: Deactivated successfully. Jul 10 00:25:20.638005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601924280.mount: Deactivated successfully. Jul 10 00:25:21.063860 containerd[1724]: time="2025-07-10T00:25:21.063820676Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:21.067922 containerd[1724]: time="2025-07-10T00:25:21.067885187Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 00:25:21.070547 containerd[1724]: time="2025-07-10T00:25:21.070507726Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:21.073629 containerd[1724]: time="2025-07-10T00:25:21.073585837Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:21.074211 containerd[1724]: time="2025-07-10T00:25:21.074015657Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.983982892s" Jul 10 00:25:21.074211 containerd[1724]: time="2025-07-10T00:25:21.074042971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 00:25:21.076026 containerd[1724]: time="2025-07-10T00:25:21.076000283Z" level=info 
msg="CreateContainer within sandbox \"7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:25:21.105074 containerd[1724]: time="2025-07-10T00:25:21.105049494Z" level=info msg="Container f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:21.117795 containerd[1724]: time="2025-07-10T00:25:21.117770417Z" level=info msg="CreateContainer within sandbox \"7c18fa90a39e23314f5455a22cab90f7bbf70bf2d6cc7f3a7981a32fa8c76241\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46\"" Jul 10 00:25:21.118134 containerd[1724]: time="2025-07-10T00:25:21.118093575Z" level=info msg="StartContainer for \"f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46\"" Jul 10 00:25:21.118931 containerd[1724]: time="2025-07-10T00:25:21.118896175Z" level=info msg="connecting to shim f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46" address="unix:///run/containerd/s/142120f463a8947ca332fa7a3a6a47faaaf7da56ef0f991fc4513f7b74664acf" protocol=ttrpc version=3 Jul 10 00:25:21.137253 systemd[1]: Started cri-containerd-f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46.scope - libcontainer container f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46. 
Jul 10 00:25:21.163151 containerd[1724]: time="2025-07-10T00:25:21.163125732Z" level=info msg="StartContainer for \"f72254977140b14b806d3679f09877d6cde5d059a1c4c5a453a0d77eee0c4e46\" returns successfully" Jul 10 00:25:23.271171 kubelet[3140]: I0710 00:25:23.271004 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-vqxc4" podStartSLOduration=3.285528324 podStartE2EDuration="5.270986963s" podCreationTimestamp="2025-07-10 00:25:18 +0000 UTC" firstStartedPulling="2025-07-10 00:25:19.089330845 +0000 UTC m=+7.690101249" lastFinishedPulling="2025-07-10 00:25:21.074789474 +0000 UTC m=+9.675559888" observedRunningTime="2025-07-10 00:25:21.544629842 +0000 UTC m=+10.145400253" watchObservedRunningTime="2025-07-10 00:25:23.270986963 +0000 UTC m=+11.871757376" Jul 10 00:25:26.752769 sudo[2159]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:26.855573 sshd[2158]: Connection closed by 10.200.16.10 port 41518 Jul 10 00:25:26.855249 sshd-session[2156]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:26.858742 systemd[1]: sshd@6-10.200.8.45:22-10.200.16.10:41518.service: Deactivated successfully. Jul 10 00:25:26.863329 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:25:26.863964 systemd[1]: session-9.scope: Consumed 3.025s CPU time, 225.7M memory peak. Jul 10 00:25:26.867812 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:25:26.869388 systemd-logind[1699]: Removed session 9. 
Jul 10 00:25:30.275069 kubelet[3140]: W0710 00:25:30.273983 3140 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4344.1.1-n-4eb7f9ac8a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object Jul 10 00:25:30.275069 kubelet[3140]: E0710 00:25:30.274035 3140 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4344.1.1-n-4eb7f9ac8a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object" logger="UnhandledError" Jul 10 00:25:30.275069 kubelet[3140]: W0710 00:25:30.274126 3140 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4344.1.1-n-4eb7f9ac8a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object Jul 10 00:25:30.275069 kubelet[3140]: E0710 00:25:30.274138 3140 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344.1.1-n-4eb7f9ac8a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object" logger="UnhandledError" Jul 10 00:25:30.275069 kubelet[3140]: W0710 00:25:30.274290 3140 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User 
"system:node:ci-4344.1.1-n-4eb7f9ac8a" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object Jul 10 00:25:30.275493 kubelet[3140]: E0710 00:25:30.274304 3140 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4344.1.1-n-4eb7f9ac8a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.1-n-4eb7f9ac8a' and this object" logger="UnhandledError" Jul 10 00:25:30.277545 systemd[1]: Created slice kubepods-besteffort-pod611717d7_c416_4d59_b774_f8a9918d7703.slice - libcontainer container kubepods-besteffort-pod611717d7_c416_4d59_b774_f8a9918d7703.slice. Jul 10 00:25:30.395016 kubelet[3140]: I0710 00:25:30.394985 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/611717d7-c416-4d59-b774-f8a9918d7703-tigera-ca-bundle\") pod \"calico-typha-c9bc68fc-msvwl\" (UID: \"611717d7-c416-4d59-b774-f8a9918d7703\") " pod="calico-system/calico-typha-c9bc68fc-msvwl" Jul 10 00:25:30.395181 kubelet[3140]: I0710 00:25:30.395020 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/611717d7-c416-4d59-b774-f8a9918d7703-typha-certs\") pod \"calico-typha-c9bc68fc-msvwl\" (UID: \"611717d7-c416-4d59-b774-f8a9918d7703\") " pod="calico-system/calico-typha-c9bc68fc-msvwl" Jul 10 00:25:30.395181 kubelet[3140]: I0710 00:25:30.395044 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jss6q\" (UniqueName: \"kubernetes.io/projected/611717d7-c416-4d59-b774-f8a9918d7703-kube-api-access-jss6q\") pod 
\"calico-typha-c9bc68fc-msvwl\" (UID: \"611717d7-c416-4d59-b774-f8a9918d7703\") " pod="calico-system/calico-typha-c9bc68fc-msvwl" Jul 10 00:25:30.664177 systemd[1]: Created slice kubepods-besteffort-pod7d9dc960_a6d6_47f1_9e00_e2214fc697be.slice - libcontainer container kubepods-besteffort-pod7d9dc960_a6d6_47f1_9e00_e2214fc697be.slice. Jul 10 00:25:30.797429 kubelet[3140]: I0710 00:25:30.797399 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-xtables-lock\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797429 kubelet[3140]: I0710 00:25:30.797438 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-lib-modules\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797576 kubelet[3140]: I0710 00:25:30.797452 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-var-run-calico\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797576 kubelet[3140]: I0710 00:25:30.797466 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d9dc960-a6d6-47f1-9e00-e2214fc697be-node-certs\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797576 kubelet[3140]: I0710 00:25:30.797480 3140 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-cni-log-dir\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797576 kubelet[3140]: I0710 00:25:30.797494 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-policysync\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797576 kubelet[3140]: I0710 00:25:30.797517 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-cni-net-dir\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797678 kubelet[3140]: I0710 00:25:30.797531 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-cni-bin-dir\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797678 kubelet[3140]: I0710 00:25:30.797543 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-flexvol-driver-host\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797678 kubelet[3140]: I0710 00:25:30.797561 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72fqh\" (UniqueName: \"kubernetes.io/projected/7d9dc960-a6d6-47f1-9e00-e2214fc697be-kube-api-access-72fqh\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797678 kubelet[3140]: I0710 00:25:30.797581 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d9dc960-a6d6-47f1-9e00-e2214fc697be-tigera-ca-bundle\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.797678 kubelet[3140]: I0710 00:25:30.797599 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d9dc960-a6d6-47f1-9e00-e2214fc697be-var-lib-calico\") pod \"calico-node-z2s67\" (UID: \"7d9dc960-a6d6-47f1-9e00-e2214fc697be\") " pod="calico-system/calico-node-z2s67" Jul 10 00:25:30.903681 kubelet[3140]: E0710 00:25:30.903656 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.903681 kubelet[3140]: W0710 00:25:30.903678 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.903817 kubelet[3140]: E0710 00:25:30.903697 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.949377 kubelet[3140]: E0710 00:25:30.949196 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:30.998596 kubelet[3140]: E0710 00:25:30.998573 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.998596 kubelet[3140]: W0710 00:25:30.998591 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.998730 kubelet[3140]: E0710 00:25:30.998608 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:30.998730 kubelet[3140]: E0710 00:25:30.998727 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.998782 kubelet[3140]: W0710 00:25:30.998732 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.998782 kubelet[3140]: E0710 00:25:30.998740 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.998847 kubelet[3140]: E0710 00:25:30.998839 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.998847 kubelet[3140]: W0710 00:25:30.998845 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.998894 kubelet[3140]: E0710 00:25:30.998852 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:30.998954 kubelet[3140]: E0710 00:25:30.998942 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.998954 kubelet[3140]: W0710 00:25:30.998950 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999009 kubelet[3140]: E0710 00:25:30.998956 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.999107 kubelet[3140]: E0710 00:25:30.999046 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999107 kubelet[3140]: W0710 00:25:30.999052 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999107 kubelet[3140]: E0710 00:25:30.999057 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:30.999214 kubelet[3140]: E0710 00:25:30.999199 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999240 kubelet[3140]: W0710 00:25:30.999212 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999240 kubelet[3140]: E0710 00:25:30.999223 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.999372 kubelet[3140]: E0710 00:25:30.999360 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999372 kubelet[3140]: W0710 00:25:30.999369 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999419 kubelet[3140]: E0710 00:25:30.999377 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:30.999516 kubelet[3140]: E0710 00:25:30.999492 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999516 kubelet[3140]: W0710 00:25:30.999513 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999578 kubelet[3140]: E0710 00:25:30.999520 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.999637 kubelet[3140]: E0710 00:25:30.999626 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999637 kubelet[3140]: W0710 00:25:30.999635 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999686 kubelet[3140]: E0710 00:25:30.999642 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:30.999783 kubelet[3140]: E0710 00:25:30.999758 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999783 kubelet[3140]: W0710 00:25:30.999780 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999841 kubelet[3140]: E0710 00:25:30.999788 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:30.999905 kubelet[3140]: E0710 00:25:30.999893 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:30.999905 kubelet[3140]: W0710 00:25:30.999903 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:30.999958 kubelet[3140]: E0710 00:25:30.999911 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.000009 kubelet[3140]: E0710 00:25:30.999999 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000009 kubelet[3140]: W0710 00:25:31.000007 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000073 kubelet[3140]: E0710 00:25:31.000014 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.000141 kubelet[3140]: E0710 00:25:31.000118 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000141 kubelet[3140]: W0710 00:25:31.000123 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000141 kubelet[3140]: E0710 00:25:31.000130 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.000241 kubelet[3140]: E0710 00:25:31.000232 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000241 kubelet[3140]: W0710 00:25:31.000240 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000290 kubelet[3140]: E0710 00:25:31.000247 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.000332 kubelet[3140]: E0710 00:25:31.000323 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000332 kubelet[3140]: W0710 00:25:31.000330 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000396 kubelet[3140]: E0710 00:25:31.000336 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.000431 kubelet[3140]: E0710 00:25:31.000428 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000453 kubelet[3140]: W0710 00:25:31.000432 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000453 kubelet[3140]: E0710 00:25:31.000439 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.000567 kubelet[3140]: E0710 00:25:31.000542 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000567 kubelet[3140]: W0710 00:25:31.000565 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000620 kubelet[3140]: E0710 00:25:31.000572 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.000701 kubelet[3140]: E0710 00:25:31.000678 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000701 kubelet[3140]: W0710 00:25:31.000698 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000753 kubelet[3140]: E0710 00:25:31.000705 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.000840 kubelet[3140]: E0710 00:25:31.000817 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000840 kubelet[3140]: W0710 00:25:31.000838 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000889 kubelet[3140]: E0710 00:25:31.000845 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.000944 kubelet[3140]: E0710 00:25:31.000922 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.000944 kubelet[3140]: W0710 00:25:31.000941 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.000992 kubelet[3140]: E0710 00:25:31.000947 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.001117 kubelet[3140]: E0710 00:25:31.001073 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.001117 kubelet[3140]: W0710 00:25:31.001097 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.001117 kubelet[3140]: E0710 00:25:31.001104 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.001201 kubelet[3140]: E0710 00:25:31.001198 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.001224 kubelet[3140]: W0710 00:25:31.001203 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.001224 kubelet[3140]: E0710 00:25:31.001210 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.001318 kubelet[3140]: E0710 00:25:31.001306 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.001318 kubelet[3140]: W0710 00:25:31.001314 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.001366 kubelet[3140]: E0710 00:25:31.001320 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.001415 kubelet[3140]: E0710 00:25:31.001406 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.001415 kubelet[3140]: W0710 00:25:31.001414 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.001477 kubelet[3140]: E0710 00:25:31.001420 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.001537 kubelet[3140]: E0710 00:25:31.001513 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.001560 kubelet[3140]: W0710 00:25:31.001536 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.001560 kubelet[3140]: E0710 00:25:31.001544 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.099332 kubelet[3140]: E0710 00:25:31.099310 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099332 kubelet[3140]: W0710 00:25:31.099326 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.099458 kubelet[3140]: E0710 00:25:31.099341 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.099500 kubelet[3140]: E0710 00:25:31.099460 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099500 kubelet[3140]: W0710 00:25:31.099466 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.099500 kubelet[3140]: E0710 00:25:31.099474 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.099598 kubelet[3140]: I0710 00:25:31.099497 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05230daf-d269-4253-88bf-bb9982a9f5ee-socket-dir\") pod \"csi-node-driver-7txww\" (UID: \"05230daf-d269-4253-88bf-bb9982a9f5ee\") " pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:31.099598 kubelet[3140]: E0710 00:25:31.099592 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099647 kubelet[3140]: W0710 00:25:31.099599 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.099647 kubelet[3140]: E0710 00:25:31.099614 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.099647 kubelet[3140]: I0710 00:25:31.099626 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/05230daf-d269-4253-88bf-bb9982a9f5ee-varrun\") pod \"csi-node-driver-7txww\" (UID: \"05230daf-d269-4253-88bf-bb9982a9f5ee\") " pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:31.099728 kubelet[3140]: E0710 00:25:31.099720 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099749 kubelet[3140]: W0710 00:25:31.099727 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.099749 kubelet[3140]: E0710 00:25:31.099734 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.099793 kubelet[3140]: I0710 00:25:31.099749 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05230daf-d269-4253-88bf-bb9982a9f5ee-registration-dir\") pod \"csi-node-driver-7txww\" (UID: \"05230daf-d269-4253-88bf-bb9982a9f5ee\") " pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:31.099853 kubelet[3140]: E0710 00:25:31.099842 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099853 kubelet[3140]: W0710 00:25:31.099849 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.099901 kubelet[3140]: E0710 00:25:31.099862 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.099997 kubelet[3140]: E0710 00:25:31.099971 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.099997 kubelet[3140]: W0710 00:25:31.099994 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100066 kubelet[3140]: E0710 00:25:31.100005 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.100122 kubelet[3140]: E0710 00:25:31.100117 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100160 kubelet[3140]: W0710 00:25:31.100122 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100160 kubelet[3140]: E0710 00:25:31.100132 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.100244 kubelet[3140]: E0710 00:25:31.100217 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100244 kubelet[3140]: W0710 00:25:31.100222 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100244 kubelet[3140]: E0710 00:25:31.100231 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.100338 kubelet[3140]: E0710 00:25:31.100316 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100338 kubelet[3140]: W0710 00:25:31.100321 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100338 kubelet[3140]: E0710 00:25:31.100330 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.100417 kubelet[3140]: I0710 00:25:31.100346 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z5c\" (UniqueName: \"kubernetes.io/projected/05230daf-d269-4253-88bf-bb9982a9f5ee-kube-api-access-76z5c\") pod \"csi-node-driver-7txww\" (UID: \"05230daf-d269-4253-88bf-bb9982a9f5ee\") " pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:31.100440 kubelet[3140]: E0710 00:25:31.100430 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100440 kubelet[3140]: W0710 00:25:31.100436 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100526 kubelet[3140]: E0710 00:25:31.100442 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.100551 kubelet[3140]: E0710 00:25:31.100534 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100551 kubelet[3140]: W0710 00:25:31.100540 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100594 kubelet[3140]: E0710 00:25:31.100549 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.100662 kubelet[3140]: E0710 00:25:31.100649 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100662 kubelet[3140]: W0710 00:25:31.100657 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100711 kubelet[3140]: E0710 00:25:31.100663 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.100754 kubelet[3140]: I0710 00:25:31.100737 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05230daf-d269-4253-88bf-bb9982a9f5ee-kubelet-dir\") pod \"csi-node-driver-7txww\" (UID: \"05230daf-d269-4253-88bf-bb9982a9f5ee\") " pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:31.100778 kubelet[3140]: E0710 00:25:31.100764 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100778 kubelet[3140]: W0710 00:25:31.100770 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100778 kubelet[3140]: E0710 00:25:31.100777 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.100861 kubelet[3140]: E0710 00:25:31.100851 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.100861 kubelet[3140]: W0710 00:25:31.100857 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.100908 kubelet[3140]: E0710 00:25:31.100863 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.101005 kubelet[3140]: E0710 00:25:31.100991 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101005 kubelet[3140]: W0710 00:25:31.101004 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101057 kubelet[3140]: E0710 00:25:31.101016 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.101160 kubelet[3140]: E0710 00:25:31.101136 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101160 kubelet[3140]: W0710 00:25:31.101158 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101221 kubelet[3140]: E0710 00:25:31.101167 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.101276 kubelet[3140]: E0710 00:25:31.101254 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101276 kubelet[3140]: W0710 00:25:31.101274 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101340 kubelet[3140]: E0710 00:25:31.101282 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.101404 kubelet[3140]: E0710 00:25:31.101381 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101404 kubelet[3140]: W0710 00:25:31.101403 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101449 kubelet[3140]: E0710 00:25:31.101417 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.101593 kubelet[3140]: E0710 00:25:31.101570 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101593 kubelet[3140]: W0710 00:25:31.101591 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101647 kubelet[3140]: E0710 00:25:31.101598 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.101710 kubelet[3140]: E0710 00:25:31.101700 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.101710 kubelet[3140]: W0710 00:25:31.101707 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.101757 kubelet[3140]: E0710 00:25:31.101713 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202419 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203373 kubelet[3140]: W0710 00:25:31.202437 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202451 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202559 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203373 kubelet[3140]: W0710 00:25:31.202565 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202572 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202671 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203373 kubelet[3140]: W0710 00:25:31.202677 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202684 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.203373 kubelet[3140]: E0710 00:25:31.202780 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203675 kubelet[3140]: W0710 00:25:31.202785 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.202799 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.202892 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203675 kubelet[3140]: W0710 00:25:31.202897 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.202908 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.202991 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203675 kubelet[3140]: W0710 00:25:31.202996 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.203008 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.203675 kubelet[3140]: E0710 00:25:31.203121 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203675 kubelet[3140]: W0710 00:25:31.203126 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203139 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203242 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203911 kubelet[3140]: W0710 00:25:31.203247 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203259 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203359 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203911 kubelet[3140]: W0710 00:25:31.203363 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203373 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203461 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.203911 kubelet[3140]: W0710 00:25:31.203466 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.203911 kubelet[3140]: E0710 00:25:31.203472 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203570 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204493 kubelet[3140]: W0710 00:25:31.203575 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203587 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203678 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204493 kubelet[3140]: W0710 00:25:31.203683 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203692 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203766 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204493 kubelet[3140]: W0710 00:25:31.203770 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203781 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.204493 kubelet[3140]: E0710 00:25:31.203910 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204714 kubelet[3140]: W0710 00:25:31.203914 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.203927 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.204022 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204714 kubelet[3140]: W0710 00:25:31.204027 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.204037 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.204122 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204714 kubelet[3140]: W0710 00:25:31.204128 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.204134 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.204714 kubelet[3140]: E0710 00:25:31.204204 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.204714 kubelet[3140]: W0710 00:25:31.204209 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.204921 kubelet[3140]: E0710 00:25:31.204215 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.204984 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205482 kubelet[3140]: W0710 00:25:31.204997 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205016 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205140 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205482 kubelet[3140]: W0710 00:25:31.205148 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205165 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205235 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205482 kubelet[3140]: W0710 00:25:31.205240 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205253 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.205482 kubelet[3140]: E0710 00:25:31.205321 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205722 kubelet[3140]: W0710 00:25:31.205326 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205722 kubelet[3140]: E0710 00:25:31.205337 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.205722 kubelet[3140]: E0710 00:25:31.205435 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205722 kubelet[3140]: W0710 00:25:31.205441 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205722 kubelet[3140]: E0710 00:25:31.205452 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.205921 kubelet[3140]: E0710 00:25:31.205879 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.205921 kubelet[3140]: W0710 00:25:31.205886 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.205921 kubelet[3140]: E0710 00:25:31.205897 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.206015 kubelet[3140]: E0710 00:25:31.205999 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206015 kubelet[3140]: W0710 00:25:31.206009 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206072 kubelet[3140]: E0710 00:25:31.206020 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.206137 kubelet[3140]: E0710 00:25:31.206133 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206162 kubelet[3140]: W0710 00:25:31.206139 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206162 kubelet[3140]: E0710 00:25:31.206152 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.206301 kubelet[3140]: E0710 00:25:31.206291 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206327 kubelet[3140]: W0710 00:25:31.206301 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206327 kubelet[3140]: E0710 00:25:31.206315 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.206520 kubelet[3140]: E0710 00:25:31.206416 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206520 kubelet[3140]: W0710 00:25:31.206423 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206520 kubelet[3140]: E0710 00:25:31.206435 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.206591 kubelet[3140]: E0710 00:25:31.206538 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206591 kubelet[3140]: W0710 00:25:31.206544 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206591 kubelet[3140]: E0710 00:25:31.206557 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.206885 kubelet[3140]: E0710 00:25:31.206683 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.206885 kubelet[3140]: W0710 00:25:31.206690 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.206885 kubelet[3140]: E0710 00:25:31.206702 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.207378 kubelet[3140]: E0710 00:25:31.206995 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.207378 kubelet[3140]: W0710 00:25:31.207004 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.207378 kubelet[3140]: E0710 00:25:31.207013 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.245341 kubelet[3140]: E0710 00:25:31.245222 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.245341 kubelet[3140]: W0710 00:25:31.245237 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.245341 kubelet[3140]: E0710 00:25:31.245251 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.249253 kubelet[3140]: E0710 00:25:31.249183 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.249253 kubelet[3140]: W0710 00:25:31.249197 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.249253 kubelet[3140]: E0710 00:25:31.249209 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.254425 kubelet[3140]: E0710 00:25:31.254409 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.254539 kubelet[3140]: W0710 00:25:31.254500 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.254539 kubelet[3140]: E0710 00:25:31.254516 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.304393 kubelet[3140]: E0710 00:25:31.304350 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.304393 kubelet[3140]: W0710 00:25:31.304366 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.304393 kubelet[3140]: E0710 00:25:31.304380 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.304713 kubelet[3140]: E0710 00:25:31.304503 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.304713 kubelet[3140]: W0710 00:25:31.304509 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.304713 kubelet[3140]: E0710 00:25:31.304516 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.304713 kubelet[3140]: E0710 00:25:31.304614 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.304713 kubelet[3140]: W0710 00:25:31.304618 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.304713 kubelet[3140]: E0710 00:25:31.304625 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.356454 kubelet[3140]: E0710 00:25:31.356434 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.356454 kubelet[3140]: W0710 00:25:31.356449 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.356540 kubelet[3140]: E0710 00:25:31.356461 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.359171 kubelet[3140]: E0710 00:25:31.359104 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.359171 kubelet[3140]: W0710 00:25:31.359117 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.359171 kubelet[3140]: E0710 00:25:31.359129 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:31.379770 kubelet[3140]: E0710 00:25:31.379755 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:31.379770 kubelet[3140]: W0710 00:25:31.379769 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:31.379878 kubelet[3140]: E0710 00:25:31.379780 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:31.482104 containerd[1724]: time="2025-07-10T00:25:31.481994148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9bc68fc-msvwl,Uid:611717d7-c416-4d59-b774-f8a9918d7703,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:31.516356 containerd[1724]: time="2025-07-10T00:25:31.516305376Z" level=info msg="connecting to shim 33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946" address="unix:///run/containerd/s/0e8acea9cd729d4f3c65334faf5ccd30a44101e0658a18cf5753e8de64802a36" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:31.535270 systemd[1]: Started cri-containerd-33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946.scope - libcontainer container 33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946. 
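The kubelet entries above use the standard klog header layout: a severity letter (`I`/`W`/`E`/`F`), `MMDD`, wall-clock time with microseconds, the process id, `file:line`, then `] message`. A small sketch for pulling those fields out of such lines (the sample line is copied from the log above; the regex is an assumption tuned to this log's spacing, not an official parser):

```python
import re

# klog header: severity letter + MMDD, wall time, PID, source file:line, "] msg"
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<pid>\d+) (?P<src>[\w.-]+:\d+)\] (?P<msg>.*)$"
)

line = ("W0710 00:25:31.304509 3140 driver-call.go:149] "
        "FlexVolume: driver call failed")
m = KLOG_RE.match(line)
print(m["sev"], m["src"])  # W driver-call.go:149
```

Grouping the log by `src` (e.g. `driver-call.go:262` vs `plugins.go:691`) makes it easy to see that the long error runs above are the same three call sites repeating.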
Jul 10 00:25:31.567343 containerd[1724]: time="2025-07-10T00:25:31.567273258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z2s67,Uid:7d9dc960-a6d6-47f1-9e00-e2214fc697be,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:31.584280 containerd[1724]: time="2025-07-10T00:25:31.584143616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9bc68fc-msvwl,Uid:611717d7-c416-4d59-b774-f8a9918d7703,Namespace:calico-system,Attempt:0,} returns sandbox id \"33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946\"" Jul 10 00:25:31.587121 containerd[1724]: time="2025-07-10T00:25:31.587040418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:25:31.604783 containerd[1724]: time="2025-07-10T00:25:31.604492814Z" level=info msg="connecting to shim 29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61" address="unix:///run/containerd/s/c3175e66480f74b65d9106dbfca52bc7b3c8174ce99d0cbd8010c8b84ff64440" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:31.624212 systemd[1]: Started cri-containerd-29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61.scope - libcontainer container 29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61. 
Jul 10 00:25:31.642325 containerd[1724]: time="2025-07-10T00:25:31.642298014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z2s67,Uid:7d9dc960-a6d6-47f1-9e00-e2214fc697be,Namespace:calico-system,Attempt:0,} returns sandbox id \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\"" Jul 10 00:25:32.487241 kubelet[3140]: E0710 00:25:32.487167 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:32.770713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133531675.mount: Deactivated successfully. Jul 10 00:25:33.378051 containerd[1724]: time="2025-07-10T00:25:33.378011443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:33.380236 containerd[1724]: time="2025-07-10T00:25:33.380199551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 00:25:33.383223 containerd[1724]: time="2025-07-10T00:25:33.383189310Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:33.389436 containerd[1724]: time="2025-07-10T00:25:33.389384898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:33.389719 containerd[1724]: time="2025-07-10T00:25:33.389697181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.802628591s" Jul 10 00:25:33.389758 containerd[1724]: time="2025-07-10T00:25:33.389727061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:25:33.390550 containerd[1724]: time="2025-07-10T00:25:33.390528310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:25:33.403015 containerd[1724]: time="2025-07-10T00:25:33.401780860Z" level=info msg="CreateContainer within sandbox \"33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:25:33.417925 containerd[1724]: time="2025-07-10T00:25:33.417896222Z" level=info msg="Container 29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:33.436248 containerd[1724]: time="2025-07-10T00:25:33.436222930Z" level=info msg="CreateContainer within sandbox \"33eb0fe977204dd3c98ea743384c7c7e78030045d82b5e42cea664bb7133d946\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead\"" Jul 10 00:25:33.437097 containerd[1724]: time="2025-07-10T00:25:33.436591423Z" level=info msg="StartContainer for \"29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead\"" Jul 10 00:25:33.437617 containerd[1724]: time="2025-07-10T00:25:33.437591874Z" level=info msg="connecting to shim 29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead" address="unix:///run/containerd/s/0e8acea9cd729d4f3c65334faf5ccd30a44101e0658a18cf5753e8de64802a36" protocol=ttrpc version=3 Jul 10 
00:25:33.461239 systemd[1]: Started cri-containerd-29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead.scope - libcontainer container 29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead. Jul 10 00:25:33.501410 containerd[1724]: time="2025-07-10T00:25:33.501382868Z" level=info msg="StartContainer for \"29dd58f7313af3e1f7225b4ab060f9948c61316bebd73a0ee457d666611ccead\" returns successfully" Jul 10 00:25:33.588010 kubelet[3140]: I0710 00:25:33.587813 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c9bc68fc-msvwl" podStartSLOduration=1.7841895669999999 podStartE2EDuration="3.587795279s" podCreationTimestamp="2025-07-10 00:25:30 +0000 UTC" firstStartedPulling="2025-07-10 00:25:31.586748299 +0000 UTC m=+20.187518703" lastFinishedPulling="2025-07-10 00:25:33.390354009 +0000 UTC m=+21.991124415" observedRunningTime="2025-07-10 00:25:33.587006141 +0000 UTC m=+22.187776550" watchObservedRunningTime="2025-07-10 00:25:33.587795279 +0000 UTC m=+22.188565698" Jul 10 00:25:33.619879 kubelet[3140]: E0710 00:25:33.619857 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.619879 kubelet[3140]: W0710 00:25:33.619879 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.620106 kubelet[3140]: E0710 00:25:33.619897 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.620330 kubelet[3140]: E0710 00:25:33.620316 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.620374 kubelet[3140]: W0710 00:25:33.620331 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.620374 kubelet[3140]: E0710 00:25:33.620349 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.621300 kubelet[3140]: E0710 00:25:33.621276 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.621300 kubelet[3140]: W0710 00:25:33.621292 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.621395 kubelet[3140]: E0710 00:25:33.621306 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.621502 kubelet[3140]: E0710 00:25:33.621436 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.621502 kubelet[3140]: W0710 00:25:33.621442 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.621502 kubelet[3140]: E0710 00:25:33.621450 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.621576 kubelet[3140]: E0710 00:25:33.621569 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.621576 kubelet[3140]: W0710 00:25:33.621574 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.621625 kubelet[3140]: E0710 00:25:33.621581 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.621815 kubelet[3140]: E0710 00:25:33.621665 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.621815 kubelet[3140]: W0710 00:25:33.621671 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.621815 kubelet[3140]: E0710 00:25:33.621677 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.622126 kubelet[3140]: E0710 00:25:33.622110 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622262 kubelet[3140]: W0710 00:25:33.622127 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622262 kubelet[3140]: E0710 00:25:33.622139 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.622319 kubelet[3140]: E0710 00:25:33.622287 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622319 kubelet[3140]: W0710 00:25:33.622293 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622319 kubelet[3140]: E0710 00:25:33.622301 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.622465 kubelet[3140]: E0710 00:25:33.622423 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622465 kubelet[3140]: W0710 00:25:33.622429 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622465 kubelet[3140]: E0710 00:25:33.622436 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.622544 kubelet[3140]: E0710 00:25:33.622516 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622544 kubelet[3140]: W0710 00:25:33.622520 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622544 kubelet[3140]: E0710 00:25:33.622526 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.622611 kubelet[3140]: E0710 00:25:33.622603 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622611 kubelet[3140]: W0710 00:25:33.622607 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622659 kubelet[3140]: E0710 00:25:33.622612 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.622935 kubelet[3140]: E0710 00:25:33.622690 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622935 kubelet[3140]: W0710 00:25:33.622695 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622935 kubelet[3140]: E0710 00:25:33.622700 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.622935 kubelet[3140]: E0710 00:25:33.622827 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.622935 kubelet[3140]: W0710 00:25:33.622836 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.622935 kubelet[3140]: E0710 00:25:33.622844 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.623214 kubelet[3140]: E0710 00:25:33.623012 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.623214 kubelet[3140]: W0710 00:25:33.623019 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.623214 kubelet[3140]: E0710 00:25:33.623027 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.623731 kubelet[3140]: E0710 00:25:33.623714 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.623731 kubelet[3140]: W0710 00:25:33.623730 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.624053 kubelet[3140]: E0710 00:25:33.623744 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.624053 kubelet[3140]: E0710 00:25:33.623983 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.624053 kubelet[3140]: W0710 00:25:33.623991 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.624053 kubelet[3140]: E0710 00:25:33.624001 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.624192 kubelet[3140]: E0710 00:25:33.624179 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.624192 kubelet[3140]: W0710 00:25:33.624186 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.624244 kubelet[3140]: E0710 00:25:33.624204 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.624354 kubelet[3140]: E0710 00:25:33.624343 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.624382 kubelet[3140]: W0710 00:25:33.624354 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.624382 kubelet[3140]: E0710 00:25:33.624364 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.624573 kubelet[3140]: E0710 00:25:33.624557 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.624573 kubelet[3140]: W0710 00:25:33.624569 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.625147 kubelet[3140]: E0710 00:25:33.625124 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.625334 kubelet[3140]: E0710 00:25:33.625321 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.625457 kubelet[3140]: W0710 00:25:33.625335 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.625457 kubelet[3140]: E0710 00:25:33.625448 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.625522 kubelet[3140]: E0710 00:25:33.625492 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.625522 kubelet[3140]: W0710 00:25:33.625499 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.626113 kubelet[3140]: E0710 00:25:33.625573 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.626113 kubelet[3140]: E0710 00:25:33.625646 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.626113 kubelet[3140]: W0710 00:25:33.625651 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.626113 kubelet[3140]: E0710 00:25:33.625743 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.626113 kubelet[3140]: E0710 00:25:33.625778 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.626113 kubelet[3140]: W0710 00:25:33.625783 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.626113 kubelet[3140]: E0710 00:25:33.625803 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.626392 kubelet[3140]: E0710 00:25:33.626158 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.626392 kubelet[3140]: W0710 00:25:33.626168 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.626392 kubelet[3140]: E0710 00:25:33.626194 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.627282 kubelet[3140]: E0710 00:25:33.627265 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.627282 kubelet[3140]: W0710 00:25:33.627282 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.627381 kubelet[3140]: E0710 00:25:33.627298 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.627455 kubelet[3140]: E0710 00:25:33.627445 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.627486 kubelet[3140]: W0710 00:25:33.627455 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.627539 kubelet[3140]: E0710 00:25:33.627530 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.627610 kubelet[3140]: E0710 00:25:33.627602 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.627639 kubelet[3140]: W0710 00:25:33.627610 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.627705 kubelet[3140]: E0710 00:25:33.627695 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.627739 kubelet[3140]: E0710 00:25:33.627730 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.627739 kubelet[3140]: W0710 00:25:33.627735 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.627816 kubelet[3140]: E0710 00:25:33.627806 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.628288 kubelet[3140]: E0710 00:25:33.628226 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.628288 kubelet[3140]: W0710 00:25:33.628242 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.628288 kubelet[3140]: E0710 00:25:33.628265 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.629348 kubelet[3140]: E0710 00:25:33.629111 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.629348 kubelet[3140]: W0710 00:25:33.629130 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.629348 kubelet[3140]: E0710 00:25:33.629179 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.629348 kubelet[3140]: E0710 00:25:33.629351 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.629489 kubelet[3140]: W0710 00:25:33.629358 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.629489 kubelet[3140]: E0710 00:25:33.629375 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:33.629549 kubelet[3140]: E0710 00:25:33.629526 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.629549 kubelet[3140]: W0710 00:25:33.629532 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.629549 kubelet[3140]: E0710 00:25:33.629540 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:33.630034 kubelet[3140]: E0710 00:25:33.630014 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:33.630219 kubelet[3140]: W0710 00:25:33.630116 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:33.630219 kubelet[3140]: E0710 00:25:33.630132 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:34.486711 kubelet[3140]: E0710 00:25:34.486608 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:34.527871 containerd[1724]: time="2025-07-10T00:25:34.527836097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:34.529992 containerd[1724]: time="2025-07-10T00:25:34.529954162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 10 00:25:34.532551 containerd[1724]: time="2025-07-10T00:25:34.532492406Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:34.537127 containerd[1724]: time="2025-07-10T00:25:34.537070829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:34.537486 containerd[1724]: time="2025-07-10T00:25:34.537389211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.146831293s" Jul 10 00:25:34.537486 containerd[1724]: time="2025-07-10T00:25:34.537417776Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:25:34.539320 containerd[1724]: time="2025-07-10T00:25:34.539176280Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:25:34.554104 containerd[1724]: time="2025-07-10T00:25:34.553513839Z" level=info msg="Container 1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:34.562373 kubelet[3140]: I0710 00:25:34.562354 3140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:25:34.570649 containerd[1724]: time="2025-07-10T00:25:34.570624575Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\"" Jul 10 00:25:34.572207 containerd[1724]: time="2025-07-10T00:25:34.571029758Z" level=info msg="StartContainer for \"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\"" Jul 10 00:25:34.572905 containerd[1724]: time="2025-07-10T00:25:34.572872182Z" level=info msg="connecting to shim 1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796" address="unix:///run/containerd/s/c3175e66480f74b65d9106dbfca52bc7b3c8174ce99d0cbd8010c8b84ff64440" protocol=ttrpc version=3 Jul 10 00:25:34.592240 systemd[1]: Started cri-containerd-1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796.scope - libcontainer container 1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796. 
Jul 10 00:25:34.626690 containerd[1724]: time="2025-07-10T00:25:34.626660463Z" level=info msg="StartContainer for \"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\" returns successfully" Jul 10 00:25:34.632071 kubelet[3140]: E0710 00:25:34.631989 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:34.632071 kubelet[3140]: W0710 00:25:34.632009 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:34.632071 kubelet[3140]: E0710 00:25:34.632027 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:34.632775 kubelet[3140]: E0710 00:25:34.632584 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:34.632775 kubelet[3140]: W0710 00:25:34.632621 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:34.632775 kubelet[3140]: E0710 00:25:34.632640 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:25:34.634187 kubelet[3140]: E0710 00:25:34.633067 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:34.634187 kubelet[3140]: W0710 00:25:34.634112 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:34.634187 kubelet[3140]: E0710 00:25:34.634138 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:34.634506 kubelet[3140]: E0710 00:25:34.634468 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:25:34.634506 kubelet[3140]: W0710 00:25:34.634479 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:25:34.634506 kubelet[3140]: E0710 00:25:34.634491 3140 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:25:34.636497 systemd[1]: cri-containerd-1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796.scope: Deactivated successfully. 
Jul 10 00:25:34.640778 containerd[1724]: time="2025-07-10T00:25:34.640712988Z" level=info msg="received exit event container_id:\"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\" id:\"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\" pid:3823 exited_at:{seconds:1752107134 nanos:640306681}" Jul 10 00:25:34.640778 containerd[1724]: time="2025-07-10T00:25:34.640760654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\" id:\"1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796\" pid:3823 exited_at:{seconds:1752107134 nanos:640306681}" Jul 10 00:25:34.656251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1926d8b00a0d92aff9f5345d425939fac6bb0e77870f1769e45a4770ab2a2796-rootfs.mount: Deactivated successfully. Jul 10 00:25:36.486706 kubelet[3140]: E0710 00:25:36.486660 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:37.572403 containerd[1724]: time="2025-07-10T00:25:37.572356288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:25:38.486380 kubelet[3140]: E0710 00:25:38.486317 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:40.486626 kubelet[3140]: E0710 00:25:40.486574 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:42.486183 kubelet[3140]: E0710 00:25:42.486132 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:43.878169 containerd[1724]: time="2025-07-10T00:25:43.878131482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:43.880847 containerd[1724]: time="2025-07-10T00:25:43.880805944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 00:25:43.944314 containerd[1724]: time="2025-07-10T00:25:43.943596655Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:43.989366 containerd[1724]: time="2025-07-10T00:25:43.989333563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:43.989985 containerd[1724]: time="2025-07-10T00:25:43.989965373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.417569566s" Jul 10 00:25:43.990062 containerd[1724]: time="2025-07-10T00:25:43.990051170Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:25:43.991857 containerd[1724]: time="2025-07-10T00:25:43.991828947Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:25:44.181112 containerd[1724]: time="2025-07-10T00:25:44.178828272Z" level=info msg="Container 8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:44.304185 containerd[1724]: time="2025-07-10T00:25:44.304147189Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\"" Jul 10 00:25:44.304506 containerd[1724]: time="2025-07-10T00:25:44.304457173Z" level=info msg="StartContainer for \"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\"" Jul 10 00:25:44.305882 containerd[1724]: time="2025-07-10T00:25:44.305834765Z" level=info msg="connecting to shim 8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29" address="unix:///run/containerd/s/c3175e66480f74b65d9106dbfca52bc7b3c8174ce99d0cbd8010c8b84ff64440" protocol=ttrpc version=3 Jul 10 00:25:44.325245 systemd[1]: Started cri-containerd-8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29.scope - libcontainer container 8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29. 
Jul 10 00:25:44.363190 containerd[1724]: time="2025-07-10T00:25:44.363120362Z" level=info msg="StartContainer for \"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\" returns successfully" Jul 10 00:25:44.486601 kubelet[3140]: E0710 00:25:44.486515 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:46.486298 kubelet[3140]: E0710 00:25:46.486224 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:46.611909 kubelet[3140]: I0710 00:25:46.611848 3140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:25:48.486609 kubelet[3140]: E0710 00:25:48.486568 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:50.486655 kubelet[3140]: E0710 00:25:50.486621 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:51.549759 containerd[1724]: time="2025-07-10T00:25:51.549712559Z" level=error msg="failed to reload cni 
configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:25:51.551490 systemd[1]: cri-containerd-8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29.scope: Deactivated successfully. Jul 10 00:25:51.551766 systemd[1]: cri-containerd-8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29.scope: Consumed 388ms CPU time, 192.5M memory peak, 171.2M written to disk. Jul 10 00:25:51.553645 containerd[1724]: time="2025-07-10T00:25:51.553580324Z" level=info msg="received exit event container_id:\"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\" id:\"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\" pid:3892 exited_at:{seconds:1752107151 nanos:553407081}" Jul 10 00:25:51.553771 containerd[1724]: time="2025-07-10T00:25:51.553745591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\" id:\"8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29\" pid:3892 exited_at:{seconds:1752107151 nanos:553407081}" Jul 10 00:25:51.570541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e976a7949ccf1052dc9020b6401c5d17a7e81474ffa6fe8ceb4855f52bc2e29-rootfs.mount: Deactivated successfully. 
Jul 10 00:25:51.607455 kubelet[3140]: I0710 00:25:51.607434 3140 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:25:51.638766 kubelet[3140]: I0710 00:25:51.638252 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbm7\" (UniqueName: \"kubernetes.io/projected/4bca3397-d55e-47b7-bc8a-abadb4be978b-kube-api-access-lnbm7\") pod \"coredns-7c65d6cfc9-kjfdh\" (UID: \"4bca3397-d55e-47b7-bc8a-abadb4be978b\") " pod="kube-system/coredns-7c65d6cfc9-kjfdh" Jul 10 00:25:51.640334 kubelet[3140]: I0710 00:25:51.640049 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bca3397-d55e-47b7-bc8a-abadb4be978b-config-volume\") pod \"coredns-7c65d6cfc9-kjfdh\" (UID: \"4bca3397-d55e-47b7-bc8a-abadb4be978b\") " pod="kube-system/coredns-7c65d6cfc9-kjfdh" Jul 10 00:25:51.640901 systemd[1]: Created slice kubepods-burstable-pod4bca3397_d55e_47b7_bc8a_abadb4be978b.slice - libcontainer container kubepods-burstable-pod4bca3397_d55e_47b7_bc8a_abadb4be978b.slice. Jul 10 00:25:51.654552 systemd[1]: Created slice kubepods-besteffort-pod25c4ed24_29ac_4bf7_80b8_e00dec1e9317.slice - libcontainer container kubepods-besteffort-pod25c4ed24_29ac_4bf7_80b8_e00dec1e9317.slice. Jul 10 00:25:51.670407 systemd[1]: Created slice kubepods-burstable-poddb083789_01cf_4460_b9a7_87ff0739e6f2.slice - libcontainer container kubepods-burstable-poddb083789_01cf_4460_b9a7_87ff0739e6f2.slice. Jul 10 00:25:51.681250 systemd[1]: Created slice kubepods-besteffort-poddb7b726a_b736_4632_bb42_8ec5e9685499.slice - libcontainer container kubepods-besteffort-poddb7b726a_b736_4632_bb42_8ec5e9685499.slice. Jul 10 00:25:51.688020 systemd[1]: Created slice kubepods-besteffort-pod0fc092e0_ddf2_4f53_b9c5_9146b5122598.slice - libcontainer container kubepods-besteffort-pod0fc092e0_ddf2_4f53_b9c5_9146b5122598.slice. 
Jul 10 00:25:51.693701 systemd[1]: Created slice kubepods-besteffort-podcdaad9eb_fc74_4d00_866b_8dccf6a03968.slice - libcontainer container kubepods-besteffort-podcdaad9eb_fc74_4d00_866b_8dccf6a03968.slice. Jul 10 00:25:51.699321 systemd[1]: Created slice kubepods-besteffort-pod1312a2a6_f706_422b_9037_878bec5fe5cb.slice - libcontainer container kubepods-besteffort-pod1312a2a6_f706_422b_9037_878bec5fe5cb.slice. Jul 10 00:25:51.740996 kubelet[3140]: I0710 00:25:51.740975 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1312a2a6-f706-422b-9037-878bec5fe5cb-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-lpxzc\" (UID: \"1312a2a6-f706-422b-9037-878bec5fe5cb\") " pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:51.741132 kubelet[3140]: I0710 00:25:51.741121 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjjk\" (UniqueName: \"kubernetes.io/projected/1312a2a6-f706-422b-9037-878bec5fe5cb-kube-api-access-qmjjk\") pod \"goldmane-58fd7646b9-lpxzc\" (UID: \"1312a2a6-f706-422b-9037-878bec5fe5cb\") " pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:51.741191 kubelet[3140]: I0710 00:25:51.741183 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27clx\" (UniqueName: \"kubernetes.io/projected/0fc092e0-ddf2-4f53-b9c5-9146b5122598-kube-api-access-27clx\") pod \"calico-apiserver-7fdfb9b9b5-bdww2\" (UID: \"0fc092e0-ddf2-4f53-b9c5-9146b5122598\") " pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" Jul 10 00:25:51.741237 kubelet[3140]: I0710 00:25:51.741230 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1312a2a6-f706-422b-9037-878bec5fe5cb-config\") pod \"goldmane-58fd7646b9-lpxzc\" (UID: 
\"1312a2a6-f706-422b-9037-878bec5fe5cb\") " pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:51.741295 kubelet[3140]: I0710 00:25:51.741286 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6cw\" (UniqueName: \"kubernetes.io/projected/cdaad9eb-fc74-4d00-866b-8dccf6a03968-kube-api-access-9n6cw\") pod \"whisker-d4d5f8477-r58pb\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " pod="calico-system/whisker-d4d5f8477-r58pb" Jul 10 00:25:51.741344 kubelet[3140]: I0710 00:25:51.741336 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zxc\" (UniqueName: \"kubernetes.io/projected/25c4ed24-29ac-4bf7-80b8-e00dec1e9317-kube-api-access-84zxc\") pod \"calico-kube-controllers-d7c7f5cf4-vjn4s\" (UID: \"25c4ed24-29ac-4bf7-80b8-e00dec1e9317\") " pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" Jul 10 00:25:51.741385 kubelet[3140]: I0710 00:25:51.741378 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0fc092e0-ddf2-4f53-b9c5-9146b5122598-calico-apiserver-certs\") pod \"calico-apiserver-7fdfb9b9b5-bdww2\" (UID: \"0fc092e0-ddf2-4f53-b9c5-9146b5122598\") " pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" Jul 10 00:25:51.741452 kubelet[3140]: I0710 00:25:51.741429 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c4ed24-29ac-4bf7-80b8-e00dec1e9317-tigera-ca-bundle\") pod \"calico-kube-controllers-d7c7f5cf4-vjn4s\" (UID: \"25c4ed24-29ac-4bf7-80b8-e00dec1e9317\") " pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" Jul 10 00:25:51.741485 kubelet[3140]: I0710 00:25:51.741450 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" 
(UniqueName: \"kubernetes.io/secret/1312a2a6-f706-422b-9037-878bec5fe5cb-goldmane-key-pair\") pod \"goldmane-58fd7646b9-lpxzc\" (UID: \"1312a2a6-f706-422b-9037-878bec5fe5cb\") " pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:51.741485 kubelet[3140]: I0710 00:25:51.741466 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-ca-bundle\") pod \"whisker-d4d5f8477-r58pb\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " pod="calico-system/whisker-d4d5f8477-r58pb" Jul 10 00:25:51.741485 kubelet[3140]: I0710 00:25:51.741482 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db083789-01cf-4460-b9a7-87ff0739e6f2-config-volume\") pod \"coredns-7c65d6cfc9-d6fbp\" (UID: \"db083789-01cf-4460-b9a7-87ff0739e6f2\") " pod="kube-system/coredns-7c65d6cfc9-d6fbp" Jul 10 00:25:51.741588 kubelet[3140]: I0710 00:25:51.741498 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db7b726a-b736-4632-bb42-8ec5e9685499-calico-apiserver-certs\") pod \"calico-apiserver-7fdfb9b9b5-9mdgf\" (UID: \"db7b726a-b736-4632-bb42-8ec5e9685499\") " pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" Jul 10 00:25:51.741588 kubelet[3140]: I0710 00:25:51.741526 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-backend-key-pair\") pod \"whisker-d4d5f8477-r58pb\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " pod="calico-system/whisker-d4d5f8477-r58pb" Jul 10 00:25:51.741588 kubelet[3140]: I0710 00:25:51.741544 3140 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvb6\" (UniqueName: \"kubernetes.io/projected/db083789-01cf-4460-b9a7-87ff0739e6f2-kube-api-access-xwvb6\") pod \"coredns-7c65d6cfc9-d6fbp\" (UID: \"db083789-01cf-4460-b9a7-87ff0739e6f2\") " pod="kube-system/coredns-7c65d6cfc9-d6fbp" Jul 10 00:25:51.741588 kubelet[3140]: I0710 00:25:51.741559 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5w5k\" (UniqueName: \"kubernetes.io/projected/db7b726a-b736-4632-bb42-8ec5e9685499-kube-api-access-b5w5k\") pod \"calico-apiserver-7fdfb9b9b5-9mdgf\" (UID: \"db7b726a-b736-4632-bb42-8ec5e9685499\") " pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" Jul 10 00:25:51.947333 containerd[1724]: time="2025-07-10T00:25:51.947299147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kjfdh,Uid:4bca3397-d55e-47b7-bc8a-abadb4be978b,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:51.964045 containerd[1724]: time="2025-07-10T00:25:51.964003756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d7c7f5cf4-vjn4s,Uid:25c4ed24-29ac-4bf7-80b8-e00dec1e9317,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:51.980603 containerd[1724]: time="2025-07-10T00:25:51.980577052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fbp,Uid:db083789-01cf-4460-b9a7-87ff0739e6f2,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:51.986187 containerd[1724]: time="2025-07-10T00:25:51.986153944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-9mdgf,Uid:db7b726a-b736-4632-bb42-8ec5e9685499,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:25:51.992701 containerd[1724]: time="2025-07-10T00:25:51.992679569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-bdww2,Uid:0fc092e0-ddf2-4f53-b9c5-9146b5122598,Namespace:calico-apiserver,Attempt:0,}" 
Jul 10 00:25:51.996091 containerd[1724]: time="2025-07-10T00:25:51.996045052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d5f8477-r58pb,Uid:cdaad9eb-fc74-4d00-866b-8dccf6a03968,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:52.001577 containerd[1724]: time="2025-07-10T00:25:52.001548012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lpxzc,Uid:1312a2a6-f706-422b-9037-878bec5fe5cb,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:52.491272 systemd[1]: Created slice kubepods-besteffort-pod05230daf_d269_4253_88bf_bb9982a9f5ee.slice - libcontainer container kubepods-besteffort-pod05230daf_d269_4253_88bf_bb9982a9f5ee.slice. Jul 10 00:25:52.493221 containerd[1724]: time="2025-07-10T00:25:52.493185738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7txww,Uid:05230daf-d269-4253-88bf-bb9982a9f5ee,Namespace:calico-system,Attempt:0,}" Jul 10 00:25:55.256452 containerd[1724]: time="2025-07-10T00:25:55.256261100Z" level=error msg="Failed to destroy network for sandbox \"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.258632 systemd[1]: run-netns-cni\x2d73dd8444\x2dd58b\x2d8c98\x2dda37\x2df894db2c2afc.mount: Deactivated successfully. 
Jul 10 00:25:55.264371 containerd[1724]: time="2025-07-10T00:25:55.264319843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-bdww2,Uid:0fc092e0-ddf2-4f53-b9c5-9146b5122598,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.264840 kubelet[3140]: E0710 00:25:55.264796 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.265393 kubelet[3140]: E0710 00:25:55.264880 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" Jul 10 00:25:55.265393 kubelet[3140]: E0710 00:25:55.264903 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" Jul 10 00:25:55.265393 kubelet[3140]: E0710 00:25:55.264959 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fdfb9b9b5-bdww2_calico-apiserver(0fc092e0-ddf2-4f53-b9c5-9146b5122598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fdfb9b9b5-bdww2_calico-apiserver(0fc092e0-ddf2-4f53-b9c5-9146b5122598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4dad5d557cad005be04caca68a43d5623d6d84cde0ca25a1bd272bad8d0c746\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" podUID="0fc092e0-ddf2-4f53-b9c5-9146b5122598" Jul 10 00:25:55.279977 containerd[1724]: time="2025-07-10T00:25:55.279898806Z" level=error msg="Failed to destroy network for sandbox \"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.283767 systemd[1]: run-netns-cni\x2dc4b7c34b\x2dd6ae\x2d259c\x2d8067\x2d1f8b77658a30.mount: Deactivated successfully. 
Jul 10 00:25:55.287452 containerd[1724]: time="2025-07-10T00:25:55.287350125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kjfdh,Uid:4bca3397-d55e-47b7-bc8a-abadb4be978b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.287726 kubelet[3140]: E0710 00:25:55.287611 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.287726 kubelet[3140]: E0710 00:25:55.287707 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kjfdh" Jul 10 00:25:55.287826 kubelet[3140]: E0710 00:25:55.287728 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-kjfdh" Jul 10 00:25:55.288223 kubelet[3140]: E0710 00:25:55.288134 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kjfdh_kube-system(4bca3397-d55e-47b7-bc8a-abadb4be978b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kjfdh_kube-system(4bca3397-d55e-47b7-bc8a-abadb4be978b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1694b81b08a5fc4f5f60514cdc5983163017574cd7e03567f1e75eb09f23b6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kjfdh" podUID="4bca3397-d55e-47b7-bc8a-abadb4be978b" Jul 10 00:25:55.324798 containerd[1724]: time="2025-07-10T00:25:55.324767169Z" level=error msg="Failed to destroy network for sandbox \"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.328796 containerd[1724]: time="2025-07-10T00:25:55.328737728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-9mdgf,Uid:db7b726a-b736-4632-bb42-8ec5e9685499,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.329392 kubelet[3140]: E0710 00:25:55.328935 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.329392 kubelet[3140]: E0710 00:25:55.329110 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" Jul 10 00:25:55.329392 kubelet[3140]: E0710 00:25:55.329129 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" Jul 10 00:25:55.329917 kubelet[3140]: E0710 00:25:55.329166 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fdfb9b9b5-9mdgf_calico-apiserver(db7b726a-b736-4632-bb42-8ec5e9685499)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fdfb9b9b5-9mdgf_calico-apiserver(db7b726a-b736-4632-bb42-8ec5e9685499)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b5de56e91f66ae8cd44d6d5bd5ef6642bb81bf2b43484cacab64b6c510091f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" podUID="db7b726a-b736-4632-bb42-8ec5e9685499" Jul 10 00:25:55.330109 systemd[1]: run-netns-cni\x2de9e55f29\x2da323\x2d605b\x2d9ef0\x2d53798509608f.mount: Deactivated successfully. Jul 10 00:25:55.343962 containerd[1724]: time="2025-07-10T00:25:55.343932560Z" level=error msg="Failed to destroy network for sandbox \"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.345705 systemd[1]: run-netns-cni\x2df1956861\x2df177\x2d09ff\x2df3bc\x2d0a570ebff019.mount: Deactivated successfully. Jul 10 00:25:55.348187 containerd[1724]: time="2025-07-10T00:25:55.348094997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fbp,Uid:db083789-01cf-4460-b9a7-87ff0739e6f2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.348187 containerd[1724]: time="2025-07-10T00:25:55.348155564Z" level=error msg="Failed to destroy network for sandbox \"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.348318 kubelet[3140]: E0710 00:25:55.348236 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.348318 kubelet[3140]: E0710 00:25:55.348276 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d6fbp" Jul 10 00:25:55.348318 kubelet[3140]: E0710 00:25:55.348294 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d6fbp" Jul 10 00:25:55.348468 kubelet[3140]: E0710 00:25:55.348329 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d6fbp_kube-system(db083789-01cf-4460-b9a7-87ff0739e6f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d6fbp_kube-system(db083789-01cf-4460-b9a7-87ff0739e6f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8b19583a612a5ed3e9742b3034dec04d814c2fbcf52ca567865ea3d8f0dea50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d6fbp" 
podUID="db083789-01cf-4460-b9a7-87ff0739e6f2" Jul 10 00:25:55.350838 containerd[1724]: time="2025-07-10T00:25:55.350578277Z" level=error msg="Failed to destroy network for sandbox \"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.352722 containerd[1724]: time="2025-07-10T00:25:55.352602501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4d5f8477-r58pb,Uid:cdaad9eb-fc74-4d00-866b-8dccf6a03968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.352879 containerd[1724]: time="2025-07-10T00:25:55.352658206Z" level=error msg="Failed to destroy network for sandbox \"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.353180 kubelet[3140]: E0710 00:25:55.353155 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.353241 kubelet[3140]: E0710 00:25:55.353202 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d5f8477-r58pb" Jul 10 00:25:55.353241 kubelet[3140]: E0710 00:25:55.353221 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4d5f8477-r58pb" Jul 10 00:25:55.353298 kubelet[3140]: E0710 00:25:55.353266 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d4d5f8477-r58pb_calico-system(cdaad9eb-fc74-4d00-866b-8dccf6a03968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d4d5f8477-r58pb_calico-system(cdaad9eb-fc74-4d00-866b-8dccf6a03968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c95e33c7408d1ab7355f5d75b86e8138014ce695d6a822a6d9a9dc63abba3eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d4d5f8477-r58pb" podUID="cdaad9eb-fc74-4d00-866b-8dccf6a03968" Jul 10 00:25:55.353522 containerd[1724]: time="2025-07-10T00:25:55.353483911Z" level=error msg="Failed to destroy network for sandbox \"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 10 00:25:55.356378 containerd[1724]: time="2025-07-10T00:25:55.356302950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lpxzc,Uid:1312a2a6-f706-422b-9037-878bec5fe5cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.356508 kubelet[3140]: E0710 00:25:55.356461 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.356581 kubelet[3140]: E0710 00:25:55.356521 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:55.356581 kubelet[3140]: E0710 00:25:55.356543 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-58fd7646b9-lpxzc" Jul 10 00:25:55.356667 kubelet[3140]: E0710 00:25:55.356578 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-lpxzc_calico-system(1312a2a6-f706-422b-9037-878bec5fe5cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-lpxzc_calico-system(1312a2a6-f706-422b-9037-878bec5fe5cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"528accc051fd89e2bd5628e48fa0cd2cd602cc4db92dce54a68c7ff806bf8905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-lpxzc" podUID="1312a2a6-f706-422b-9037-878bec5fe5cb" Jul 10 00:25:55.359869 containerd[1724]: time="2025-07-10T00:25:55.359815373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7txww,Uid:05230daf-d269-4253-88bf-bb9982a9f5ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.359988 kubelet[3140]: E0710 00:25:55.359963 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.360026 kubelet[3140]: E0710 00:25:55.360017 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:55.360060 kubelet[3140]: E0710 00:25:55.360036 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7txww" Jul 10 00:25:55.360137 kubelet[3140]: E0710 00:25:55.360113 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7txww_calico-system(05230daf-d269-4253-88bf-bb9982a9f5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7txww_calico-system(05230daf-d269-4253-88bf-bb9982a9f5ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb89611c130a8890b4a91d6298e874c6e85cb018b6143b02f921e78f53ca303c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7txww" podUID="05230daf-d269-4253-88bf-bb9982a9f5ee" Jul 10 00:25:55.366152 containerd[1724]: time="2025-07-10T00:25:55.366121128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d7c7f5cf4-vjn4s,Uid:25c4ed24-29ac-4bf7-80b8-e00dec1e9317,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.366305 kubelet[3140]: E0710 00:25:55.366264 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:25:55.366353 kubelet[3140]: E0710 00:25:55.366301 3140 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" Jul 10 00:25:55.366353 kubelet[3140]: E0710 00:25:55.366320 3140 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" Jul 10 00:25:55.366404 kubelet[3140]: E0710 00:25:55.366351 3140 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d7c7f5cf4-vjn4s_calico-system(25c4ed24-29ac-4bf7-80b8-e00dec1e9317)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-d7c7f5cf4-vjn4s_calico-system(25c4ed24-29ac-4bf7-80b8-e00dec1e9317)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f33c853d4ebfe7726d4c6df6a3c58bf822d5324d86487759e6c75c187bb08d30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" podUID="25c4ed24-29ac-4bf7-80b8-e00dec1e9317" Jul 10 00:25:55.605713 containerd[1724]: time="2025-07-10T00:25:55.605577821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:25:56.241014 systemd[1]: run-netns-cni\x2ddb2da8f0\x2d21d4\x2ddc2f\x2d8337\x2d9b427ba4e9c1.mount: Deactivated successfully. Jul 10 00:25:56.241118 systemd[1]: run-netns-cni\x2d1dcb0cc5\x2da71a\x2d814e\x2da627\x2d85ebdd4bff0c.mount: Deactivated successfully. Jul 10 00:25:56.241173 systemd[1]: run-netns-cni\x2d27684790\x2dcb06\x2dd6aa\x2df41b\x2d18387df8207e.mount: Deactivated successfully. Jul 10 00:25:56.241222 systemd[1]: run-netns-cni\x2df60cd2c1\x2de497\x2dda71\x2d9cf4\x2d509b54ad2f60.mount: Deactivated successfully. Jul 10 00:26:00.319378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596901959.mount: Deactivated successfully. 
Jul 10 00:26:00.343837 containerd[1724]: time="2025-07-10T00:26:00.343794035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.345838 containerd[1724]: time="2025-07-10T00:26:00.345802386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:26:00.348157 containerd[1724]: time="2025-07-10T00:26:00.348116722Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.350966 containerd[1724]: time="2025-07-10T00:26:00.350930342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.351284 containerd[1724]: time="2025-07-10T00:26:00.351169230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.74527751s" Jul 10 00:26:00.351284 containerd[1724]: time="2025-07-10T00:26:00.351197496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:26:00.362376 containerd[1724]: time="2025-07-10T00:26:00.362350966Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:26:00.377034 containerd[1724]: time="2025-07-10T00:26:00.376225946Z" level=info msg="Container 
4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:00.401023 containerd[1724]: time="2025-07-10T00:26:00.400994815Z" level=info msg="CreateContainer within sandbox \"29253ba8f2f04d1508f066653f38b8e07e3b7ce647839a8eea12b8d4982e3b61\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\"" Jul 10 00:26:00.401450 containerd[1724]: time="2025-07-10T00:26:00.401406024Z" level=info msg="StartContainer for \"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\"" Jul 10 00:26:00.403069 containerd[1724]: time="2025-07-10T00:26:00.403044211Z" level=info msg="connecting to shim 4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0" address="unix:///run/containerd/s/c3175e66480f74b65d9106dbfca52bc7b3c8174ce99d0cbd8010c8b84ff64440" protocol=ttrpc version=3 Jul 10 00:26:00.424215 systemd[1]: Started cri-containerd-4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0.scope - libcontainer container 4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0. 
Jul 10 00:26:00.455566 containerd[1724]: time="2025-07-10T00:26:00.455539906Z" level=info msg="StartContainer for \"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" returns successfully" Jul 10 00:26:00.645248 kubelet[3140]: I0710 00:26:00.644980 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z2s67" podStartSLOduration=1.9361497060000001 podStartE2EDuration="30.644960867s" podCreationTimestamp="2025-07-10 00:25:30 +0000 UTC" firstStartedPulling="2025-07-10 00:25:31.643077728 +0000 UTC m=+20.243848134" lastFinishedPulling="2025-07-10 00:26:00.351888899 +0000 UTC m=+48.952659295" observedRunningTime="2025-07-10 00:26:00.643968176 +0000 UTC m=+49.244738588" watchObservedRunningTime="2025-07-10 00:26:00.644960867 +0000 UTC m=+49.245731291" Jul 10 00:26:00.705577 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:26:00.705656 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 10 00:26:00.709050 containerd[1724]: time="2025-07-10T00:26:00.709017955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"4f86537186a3934a4249f2c33793f2d474e040e906f3dc23457a0b4d96d94ea8\" pid:4212 exit_status:1 exited_at:{seconds:1752107160 nanos:708792337}" Jul 10 00:26:00.990096 kubelet[3140]: I0710 00:26:00.990050 3140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6cw\" (UniqueName: \"kubernetes.io/projected/cdaad9eb-fc74-4d00-866b-8dccf6a03968-kube-api-access-9n6cw\") pod \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " Jul 10 00:26:00.990472 kubelet[3140]: I0710 00:26:00.990258 3140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-ca-bundle\") pod \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " Jul 10 00:26:00.990472 kubelet[3140]: I0710 00:26:00.990285 3140 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-backend-key-pair\") pod \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\" (UID: \"cdaad9eb-fc74-4d00-866b-8dccf6a03968\") " Jul 10 00:26:00.992460 kubelet[3140]: I0710 00:26:00.992424 3140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cdaad9eb-fc74-4d00-866b-8dccf6a03968" (UID: "cdaad9eb-fc74-4d00-866b-8dccf6a03968"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:26:00.992664 kubelet[3140]: I0710 00:26:00.992627 3140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cdaad9eb-fc74-4d00-866b-8dccf6a03968" (UID: "cdaad9eb-fc74-4d00-866b-8dccf6a03968"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:26:00.992702 kubelet[3140]: I0710 00:26:00.992691 3140 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdaad9eb-fc74-4d00-866b-8dccf6a03968-kube-api-access-9n6cw" (OuterVolumeSpecName: "kube-api-access-9n6cw") pod "cdaad9eb-fc74-4d00-866b-8dccf6a03968" (UID: "cdaad9eb-fc74-4d00-866b-8dccf6a03968"). InnerVolumeSpecName "kube-api-access-9n6cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:26:01.090955 kubelet[3140]: I0710 00:26:01.090917 3140 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-ca-bundle\") on node \"ci-4344.1.1-n-4eb7f9ac8a\" DevicePath \"\"" Jul 10 00:26:01.090955 kubelet[3140]: I0710 00:26:01.090952 3140 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdaad9eb-fc74-4d00-866b-8dccf6a03968-whisker-backend-key-pair\") on node \"ci-4344.1.1-n-4eb7f9ac8a\" DevicePath \"\"" Jul 10 00:26:01.091115 kubelet[3140]: I0710 00:26:01.090965 3140 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6cw\" (UniqueName: \"kubernetes.io/projected/cdaad9eb-fc74-4d00-866b-8dccf6a03968-kube-api-access-9n6cw\") on node \"ci-4344.1.1-n-4eb7f9ac8a\" DevicePath \"\"" Jul 10 00:26:01.317768 systemd[1]: 
var-lib-kubelet-pods-cdaad9eb\x2dfc74\x2d4d00\x2d866b\x2d8dccf6a03968-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9n6cw.mount: Deactivated successfully. Jul 10 00:26:01.317916 systemd[1]: var-lib-kubelet-pods-cdaad9eb\x2dfc74\x2d4d00\x2d866b\x2d8dccf6a03968-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:26:01.498319 systemd[1]: Removed slice kubepods-besteffort-podcdaad9eb_fc74_4d00_866b_8dccf6a03968.slice - libcontainer container kubepods-besteffort-podcdaad9eb_fc74_4d00_866b_8dccf6a03968.slice. Jul 10 00:26:01.693992 systemd[1]: Created slice kubepods-besteffort-podca90398d_30c4_4c8a_b0b1_ad35b479d35a.slice - libcontainer container kubepods-besteffort-podca90398d_30c4_4c8a_b0b1_ad35b479d35a.slice. Jul 10 00:26:01.694803 kubelet[3140]: I0710 00:26:01.694608 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca90398d-30c4-4c8a-b0b1-ad35b479d35a-whisker-backend-key-pair\") pod \"whisker-9c5c74647-sw7hg\" (UID: \"ca90398d-30c4-4c8a-b0b1-ad35b479d35a\") " pod="calico-system/whisker-9c5c74647-sw7hg" Jul 10 00:26:01.694803 kubelet[3140]: I0710 00:26:01.694645 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca90398d-30c4-4c8a-b0b1-ad35b479d35a-whisker-ca-bundle\") pod \"whisker-9c5c74647-sw7hg\" (UID: \"ca90398d-30c4-4c8a-b0b1-ad35b479d35a\") " pod="calico-system/whisker-9c5c74647-sw7hg" Jul 10 00:26:01.694803 kubelet[3140]: I0710 00:26:01.694670 3140 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkq2g\" (UniqueName: \"kubernetes.io/projected/ca90398d-30c4-4c8a-b0b1-ad35b479d35a-kube-api-access-bkq2g\") pod \"whisker-9c5c74647-sw7hg\" (UID: \"ca90398d-30c4-4c8a-b0b1-ad35b479d35a\") " 
pod="calico-system/whisker-9c5c74647-sw7hg" Jul 10 00:26:01.732256 containerd[1724]: time="2025-07-10T00:26:01.732207289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"71525e5c8824b36c730e3303bdd4fef21339899d6a398588c8b75caa3f4844a3\" pid:4262 exit_status:1 exited_at:{seconds:1752107161 nanos:731961621}" Jul 10 00:26:02.000668 containerd[1724]: time="2025-07-10T00:26:02.000565590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c5c74647-sw7hg,Uid:ca90398d-30c4-4c8a-b0b1-ad35b479d35a,Namespace:calico-system,Attempt:0,}" Jul 10 00:26:02.169259 systemd-networkd[1357]: cali1409aebd997: Link UP Jul 10 00:26:02.170375 systemd-networkd[1357]: cali1409aebd997: Gained carrier Jul 10 00:26:02.191004 containerd[1724]: 2025-07-10 00:26:02.051 [INFO][4361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:26:02.191004 containerd[1724]: 2025-07-10 00:26:02.064 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0 whisker-9c5c74647- calico-system ca90398d-30c4-4c8a-b0b1-ad35b479d35a 910 0 2025-07-10 00:26:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9c5c74647 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a whisker-9c5c74647-sw7hg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1409aebd997 [] [] }} ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-" Jul 10 00:26:02.191004 containerd[1724]: 2025-07-10 00:26:02.064 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191004 containerd[1724]: 2025-07-10 00:26:02.107 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" HandleID="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.107 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" HandleID="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"whisker-9c5c74647-sw7hg", "timestamp":"2025-07-10 00:26:02.107274138 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.109 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.109 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.109 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.114 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.120 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.123 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.126 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191229 containerd[1724]: 2025-07-10 00:26:02.130 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.130 [INFO][4373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.133 [INFO][4373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364 Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.137 [INFO][4373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.143 [INFO][4373] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.65/26] block=192.168.78.64/26 handle="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.143 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.65/26] handle="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.143 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:02.191452 containerd[1724]: 2025-07-10 00:26:02.143 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.65/26] IPv6=[] ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" HandleID="k8s-pod-network.be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191601 containerd[1724]: 2025-07-10 00:26:02.146 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0", GenerateName:"whisker-9c5c74647-", Namespace:"calico-system", SelfLink:"", UID:"ca90398d-30c4-4c8a-b0b1-ad35b479d35a", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c5c74647", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"whisker-9c5c74647-sw7hg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1409aebd997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:02.191601 containerd[1724]: 2025-07-10 00:26:02.146 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.65/32] ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191692 containerd[1724]: 2025-07-10 00:26:02.146 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1409aebd997 ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191692 containerd[1724]: 2025-07-10 00:26:02.169 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.191734 containerd[1724]: 2025-07-10 00:26:02.171 [INFO][4361] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0", GenerateName:"whisker-9c5c74647-", Namespace:"calico-system", SelfLink:"", UID:"ca90398d-30c4-4c8a-b0b1-ad35b479d35a", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c5c74647", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364", Pod:"whisker-9c5c74647-sw7hg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1409aebd997", MAC:"e6:70:41:8b:56:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:02.191795 containerd[1724]: 2025-07-10 00:26:02.186 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" 
Namespace="calico-system" Pod="whisker-9c5c74647-sw7hg" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-whisker--9c5c74647--sw7hg-eth0" Jul 10 00:26:02.252034 containerd[1724]: time="2025-07-10T00:26:02.251504378Z" level=info msg="connecting to shim be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364" address="unix:///run/containerd/s/34a8d9c59869b1e6584dd38754e7d6fbafbb14b61fb5d0e7f42bd69f09cab4f6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:02.285390 systemd[1]: Started cri-containerd-be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364.scope - libcontainer container be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364. Jul 10 00:26:02.373909 containerd[1724]: time="2025-07-10T00:26:02.373874777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c5c74647-sw7hg,Uid:ca90398d-30c4-4c8a-b0b1-ad35b479d35a,Namespace:calico-system,Attempt:0,} returns sandbox id \"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364\"" Jul 10 00:26:02.375769 containerd[1724]: time="2025-07-10T00:26:02.375716931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:26:02.596508 systemd-networkd[1357]: vxlan.calico: Link UP Jul 10 00:26:02.596516 systemd-networkd[1357]: vxlan.calico: Gained carrier Jul 10 00:26:03.494331 kubelet[3140]: I0710 00:26:03.494255 3140 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdaad9eb-fc74-4d00-866b-8dccf6a03968" path="/var/lib/kubelet/pods/cdaad9eb-fc74-4d00-866b-8dccf6a03968/volumes" Jul 10 00:26:03.564822 containerd[1724]: time="2025-07-10T00:26:03.564783374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:03.566832 containerd[1724]: time="2025-07-10T00:26:03.566769202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:26:03.570150 
containerd[1724]: time="2025-07-10T00:26:03.570128811Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:03.574326 containerd[1724]: time="2025-07-10T00:26:03.574283806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:03.574737 containerd[1724]: time="2025-07-10T00:26:03.574607468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.198864221s" Jul 10 00:26:03.574737 containerd[1724]: time="2025-07-10T00:26:03.574633884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:26:03.576468 containerd[1724]: time="2025-07-10T00:26:03.576442894Z" level=info msg="CreateContainer within sandbox \"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:26:03.591110 containerd[1724]: time="2025-07-10T00:26:03.590314400Z" level=info msg="Container 1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:03.609709 containerd[1724]: time="2025-07-10T00:26:03.609678860Z" level=info msg="CreateContainer within sandbox \"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d\"" Jul 10 00:26:03.610179 containerd[1724]: time="2025-07-10T00:26:03.610046274Z" level=info msg="StartContainer for \"1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d\"" Jul 10 00:26:03.611271 containerd[1724]: time="2025-07-10T00:26:03.611236451Z" level=info msg="connecting to shim 1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d" address="unix:///run/containerd/s/34a8d9c59869b1e6584dd38754e7d6fbafbb14b61fb5d0e7f42bd69f09cab4f6" protocol=ttrpc version=3 Jul 10 00:26:03.630216 systemd[1]: Started cri-containerd-1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d.scope - libcontainer container 1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d. Jul 10 00:26:03.670589 containerd[1724]: time="2025-07-10T00:26:03.670548737Z" level=info msg="StartContainer for \"1d4c5b9dac901aad45e283c54b85007de3647c9868e8f010d666600495f1057d\" returns successfully" Jul 10 00:26:03.672064 containerd[1724]: time="2025-07-10T00:26:03.672023856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:26:04.005216 systemd-networkd[1357]: vxlan.calico: Gained IPv6LL Jul 10 00:26:04.197223 systemd-networkd[1357]: cali1409aebd997: Gained IPv6LL Jul 10 00:26:05.330449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566539221.mount: Deactivated successfully. 
Jul 10 00:26:05.386387 containerd[1724]: time="2025-07-10T00:26:05.386323344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:05.388693 containerd[1724]: time="2025-07-10T00:26:05.388660678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:26:05.392361 containerd[1724]: time="2025-07-10T00:26:05.392223256Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:05.399447 containerd[1724]: time="2025-07-10T00:26:05.399163686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:05.400151 containerd[1724]: time="2025-07-10T00:26:05.400008352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.727891739s" Jul 10 00:26:05.400151 containerd[1724]: time="2025-07-10T00:26:05.400043180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:26:05.403862 containerd[1724]: time="2025-07-10T00:26:05.403830144Z" level=info msg="CreateContainer within sandbox \"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:26:05.422166 
containerd[1724]: time="2025-07-10T00:26:05.422139950Z" level=info msg="Container c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:05.438474 containerd[1724]: time="2025-07-10T00:26:05.438448677Z" level=info msg="CreateContainer within sandbox \"be9a5226b22898f663431fd3ed480625c338af8a17eb4af21e738f794db0c364\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c\"" Jul 10 00:26:05.439103 containerd[1724]: time="2025-07-10T00:26:05.438925671Z" level=info msg="StartContainer for \"c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c\"" Jul 10 00:26:05.440175 containerd[1724]: time="2025-07-10T00:26:05.440143372Z" level=info msg="connecting to shim c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c" address="unix:///run/containerd/s/34a8d9c59869b1e6584dd38754e7d6fbafbb14b61fb5d0e7f42bd69f09cab4f6" protocol=ttrpc version=3 Jul 10 00:26:05.456213 systemd[1]: Started cri-containerd-c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c.scope - libcontainer container c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c. 
Jul 10 00:26:05.488233 containerd[1724]: time="2025-07-10T00:26:05.488158796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kjfdh,Uid:4bca3397-d55e-47b7-bc8a-abadb4be978b,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:05.513091 containerd[1724]: time="2025-07-10T00:26:05.513038640Z" level=info msg="StartContainer for \"c8037478d18f6872cdfe49072e2fa93a7468dc40acad66ab44277f6449043d2c\" returns successfully" Jul 10 00:26:05.592889 systemd-networkd[1357]: cali1111941d876: Link UP Jul 10 00:26:05.594324 systemd-networkd[1357]: cali1111941d876: Gained carrier Jul 10 00:26:05.606373 containerd[1724]: 2025-07-10 00:26:05.537 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0 coredns-7c65d6cfc9- kube-system 4bca3397-d55e-47b7-bc8a-abadb4be978b 823 0 2025-07-10 00:25:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a coredns-7c65d6cfc9-kjfdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1111941d876 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-" Jul 10 00:26:05.606373 containerd[1724]: 2025-07-10 00:26:05.537 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606373 containerd[1724]: 2025-07-10 00:26:05.558 [INFO][4630] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" HandleID="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.558 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" HandleID="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"coredns-7c65d6cfc9-kjfdh", "timestamp":"2025-07-10 00:26:05.558759019 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.559 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.559 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.559 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.563 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.568 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.571 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.572 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606549 containerd[1724]: 2025-07-10 00:26:05.574 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.574 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.575 [INFO][4630] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88 Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.580 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.587 [INFO][4630] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.66/26] block=192.168.78.64/26 handle="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.587 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.66/26] handle="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.587 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:05.606759 containerd[1724]: 2025-07-10 00:26:05.587 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.66/26] IPv6=[] ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" HandleID="k8s-pod-network.a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.588 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4bca3397-d55e-47b7-bc8a-abadb4be978b", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"coredns-7c65d6cfc9-kjfdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1111941d876", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.588 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.66/32] ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.588 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1111941d876 ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.594 [INFO][4616] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.594 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4bca3397-d55e-47b7-bc8a-abadb4be978b", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88", Pod:"coredns-7c65d6cfc9-kjfdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1111941d876", MAC:"46:cd:af:30:7e:a1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:05.606911 containerd[1724]: 2025-07-10 00:26:05.603 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kjfdh" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--kjfdh-eth0" Jul 10 00:26:05.644634 kubelet[3140]: I0710 00:26:05.644528 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9c5c74647-sw7hg" podStartSLOduration=1.618099843 podStartE2EDuration="4.644511177s" podCreationTimestamp="2025-07-10 00:26:01 +0000 UTC" firstStartedPulling="2025-07-10 00:26:02.375075071 +0000 UTC m=+50.975845480" lastFinishedPulling="2025-07-10 00:26:05.401486399 +0000 UTC m=+54.002256814" observedRunningTime="2025-07-10 00:26:05.644459788 +0000 UTC m=+54.245230198" watchObservedRunningTime="2025-07-10 00:26:05.644511177 +0000 UTC m=+54.245281586" Jul 10 00:26:05.654357 containerd[1724]: time="2025-07-10T00:26:05.654283234Z" level=info msg="connecting to shim a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88" address="unix:///run/containerd/s/4e45933b5fa8ac7c846216b144a8f7e95b22537743dd1dbc7074d4dfd46552fe" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:05.680222 systemd[1]: Started cri-containerd-a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88.scope - libcontainer container 
a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88. Jul 10 00:26:05.717879 containerd[1724]: time="2025-07-10T00:26:05.717855127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kjfdh,Uid:4bca3397-d55e-47b7-bc8a-abadb4be978b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88\"" Jul 10 00:26:05.720307 containerd[1724]: time="2025-07-10T00:26:05.720238190Z" level=info msg="CreateContainer within sandbox \"a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:05.742352 containerd[1724]: time="2025-07-10T00:26:05.742328415Z" level=info msg="Container 48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:05.752184 containerd[1724]: time="2025-07-10T00:26:05.752160247Z" level=info msg="CreateContainer within sandbox \"a0d16815b89ba2c8ccc6d4a0a93dd7822b2de8b04af83dacc55881581e435e88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75\"" Jul 10 00:26:05.753114 containerd[1724]: time="2025-07-10T00:26:05.752579553Z" level=info msg="StartContainer for \"48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75\"" Jul 10 00:26:05.753505 containerd[1724]: time="2025-07-10T00:26:05.753470749Z" level=info msg="connecting to shim 48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75" address="unix:///run/containerd/s/4e45933b5fa8ac7c846216b144a8f7e95b22537743dd1dbc7074d4dfd46552fe" protocol=ttrpc version=3 Jul 10 00:26:05.766212 systemd[1]: Started cri-containerd-48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75.scope - libcontainer container 48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75. 
Jul 10 00:26:05.788873 containerd[1724]: time="2025-07-10T00:26:05.788846876Z" level=info msg="StartContainer for \"48a79ad57213e846d62047455f62fac43f9ebfe9078a8f13cb10c2a0dc47eb75\" returns successfully" Jul 10 00:26:06.487447 containerd[1724]: time="2025-07-10T00:26:06.487393774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-bdww2,Uid:0fc092e0-ddf2-4f53-b9c5-9146b5122598,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:26:06.575015 systemd-networkd[1357]: cali2987a513acb: Link UP Jul 10 00:26:06.575899 systemd-networkd[1357]: cali2987a513acb: Gained carrier Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.522 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0 calico-apiserver-7fdfb9b9b5- calico-apiserver 0fc092e0-ddf2-4f53-b9c5-9146b5122598 832 0 2025-07-10 00:25:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fdfb9b9b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a calico-apiserver-7fdfb9b9b5-bdww2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2987a513acb [] [] }} ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.523 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" 
WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.544 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" HandleID="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.544 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" HandleID="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"calico-apiserver-7fdfb9b9b5-bdww2", "timestamp":"2025-07-10 00:26:06.54473747 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.544 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.544 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.544 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.549 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.552 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.555 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.556 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.557 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.557 [INFO][4742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.558 [INFO][4742] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42 Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.564 [INFO][4742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.570 [INFO][4742] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.67/26] block=192.168.78.64/26 handle="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.570 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.67/26] handle="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.570 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:06.589475 containerd[1724]: 2025-07-10 00:26:06.570 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.67/26] IPv6=[] ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" HandleID="k8s-pod-network.8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.571 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0", GenerateName:"calico-apiserver-7fdfb9b9b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc092e0-ddf2-4f53-b9c5-9146b5122598", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7fdfb9b9b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"calico-apiserver-7fdfb9b9b5-bdww2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2987a513acb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.571 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.67/32] ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.571 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2987a513acb ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.576 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" 
WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.577 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0", GenerateName:"calico-apiserver-7fdfb9b9b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc092e0-ddf2-4f53-b9c5-9146b5122598", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdfb9b9b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42", Pod:"calico-apiserver-7fdfb9b9b5-bdww2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2987a513acb", MAC:"8e:1d:9a:fa:11:f0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:06.590472 containerd[1724]: 2025-07-10 00:26:06.587 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-bdww2" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--bdww2-eth0" Jul 10 00:26:06.634010 containerd[1724]: time="2025-07-10T00:26:06.633644922Z" level=info msg="connecting to shim 8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42" address="unix:///run/containerd/s/830032f4f114c8f81b553920ce01a1146e8204f54932f78b8e3bf00596769583" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:06.659099 kubelet[3140]: I0710 00:26:06.659031 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kjfdh" podStartSLOduration=48.659013375 podStartE2EDuration="48.659013375s" podCreationTimestamp="2025-07-10 00:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:06.658456759 +0000 UTC m=+55.259227171" watchObservedRunningTime="2025-07-10 00:26:06.659013375 +0000 UTC m=+55.259783785" Jul 10 00:26:06.659514 systemd[1]: Started cri-containerd-8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42.scope - libcontainer container 8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42. 
Jul 10 00:26:06.724804 containerd[1724]: time="2025-07-10T00:26:06.724739883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-bdww2,Uid:0fc092e0-ddf2-4f53-b9c5-9146b5122598,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42\"" Jul 10 00:26:06.725966 containerd[1724]: time="2025-07-10T00:26:06.725932806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:26:07.487468 containerd[1724]: time="2025-07-10T00:26:07.487198800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d7c7f5cf4-vjn4s,Uid:25c4ed24-29ac-4bf7-80b8-e00dec1e9317,Namespace:calico-system,Attempt:0,}" Jul 10 00:26:07.487838 containerd[1724]: time="2025-07-10T00:26:07.487615936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-9mdgf,Uid:db7b726a-b736-4632-bb42-8ec5e9685499,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:26:07.525484 systemd-networkd[1357]: cali1111941d876: Gained IPv6LL Jul 10 00:26:07.678537 systemd-networkd[1357]: calib092ca0c4a2: Link UP Jul 10 00:26:07.678752 systemd-networkd[1357]: calib092ca0c4a2: Gained carrier Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.543 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0 calico-apiserver-7fdfb9b9b5- calico-apiserver db7b726a-b736-4632-bb42-8ec5e9685499 830 0 2025-07-10 00:25:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fdfb9b9b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a calico-apiserver-7fdfb9b9b5-9mdgf eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calib092ca0c4a2 [] [] }} ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.544 [INFO][4818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.617 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" HandleID="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.617 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" HandleID="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"calico-apiserver-7fdfb9b9b5-9mdgf", "timestamp":"2025-07-10 00:26:07.617419191 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:07.688952 
containerd[1724]: 2025-07-10 00:26:07.618 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.618 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.618 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.625 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.632 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.652 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.653 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.656 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.656 [INFO][4834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.657 [INFO][4834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747 Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.662 [INFO][4834] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.78.64/26 handle="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4834] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.68/26] block=192.168.78.64/26 handle="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.68/26] handle="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:07.688952 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.68/26] IPv6=[] ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" HandleID="k8s-pod-network.5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.672 [INFO][4818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0", GenerateName:"calico-apiserver-7fdfb9b9b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"db7b726a-b736-4632-bb42-8ec5e9685499", ResourceVersion:"830", 
Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdfb9b9b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"calico-apiserver-7fdfb9b9b5-9mdgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib092ca0c4a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.673 [INFO][4818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.68/32] ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.673 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib092ca0c4a2 ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.678 
[INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.678 [INFO][4818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0", GenerateName:"calico-apiserver-7fdfb9b9b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"db7b726a-b736-4632-bb42-8ec5e9685499", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdfb9b9b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747", Pod:"calico-apiserver-7fdfb9b9b5-9mdgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.78.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib092ca0c4a2", MAC:"e6:f2:53:06:2d:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:07.689551 containerd[1724]: 2025-07-10 00:26:07.687 [INFO][4818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" Namespace="calico-apiserver" Pod="calico-apiserver-7fdfb9b9b5-9mdgf" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--apiserver--7fdfb9b9b5--9mdgf-eth0" Jul 10 00:26:07.729643 containerd[1724]: time="2025-07-10T00:26:07.729586846Z" level=info msg="connecting to shim 5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747" address="unix:///run/containerd/s/cc61e309bb85c9207d4b5c93ea187a1d4d144c756e4dd4de5e60fec873f940e2" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:07.752245 systemd[1]: Started cri-containerd-5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747.scope - libcontainer container 5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747. 
Jul 10 00:26:07.787844 systemd-networkd[1357]: calid244c3dcc07: Link UP Jul 10 00:26:07.788940 systemd-networkd[1357]: calid244c3dcc07: Gained carrier Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.534 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0 calico-kube-controllers-d7c7f5cf4- calico-system 25c4ed24-29ac-4bf7-80b8-e00dec1e9317 831 0 2025-07-10 00:25:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d7c7f5cf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a calico-kube-controllers-d7c7f5cf4-vjn4s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid244c3dcc07 [] [] }} ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.534 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.619 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" HandleID="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" 
Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.620 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" HandleID="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"calico-kube-controllers-d7c7f5cf4-vjn4s", "timestamp":"2025-07-10 00:26:07.618208759 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.620 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.669 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.729 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.734 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.752 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.755 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.757 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.757 [INFO][4832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.759 [INFO][4832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64 Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.767 [INFO][4832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.779 [INFO][4832] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.69/26] block=192.168.78.64/26 handle="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.780 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.69/26] handle="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.780 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:07.810694 containerd[1724]: 2025-07-10 00:26:07.780 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.69/26] IPv6=[] ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" HandleID="k8s-pod-network.2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.782 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0", GenerateName:"calico-kube-controllers-d7c7f5cf4-", Namespace:"calico-system", SelfLink:"", UID:"25c4ed24-29ac-4bf7-80b8-e00dec1e9317", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d7c7f5cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"calico-kube-controllers-d7c7f5cf4-vjn4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid244c3dcc07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.782 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.69/32] ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.782 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid244c3dcc07 ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.789 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.790 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0", GenerateName:"calico-kube-controllers-d7c7f5cf4-", Namespace:"calico-system", SelfLink:"", UID:"25c4ed24-29ac-4bf7-80b8-e00dec1e9317", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d7c7f5cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64", Pod:"calico-kube-controllers-d7c7f5cf4-vjn4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid244c3dcc07", MAC:"ca:3c:2f:82:0b:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:07.811522 containerd[1724]: 2025-07-10 00:26:07.808 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" Namespace="calico-system" Pod="calico-kube-controllers-d7c7f5cf4-vjn4s" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-calico--kube--controllers--d7c7f5cf4--vjn4s-eth0" Jul 10 00:26:07.827676 containerd[1724]: time="2025-07-10T00:26:07.827582200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdfb9b9b5-9mdgf,Uid:db7b726a-b736-4632-bb42-8ec5e9685499,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747\"" Jul 10 00:26:07.845197 systemd-networkd[1357]: cali2987a513acb: Gained IPv6LL Jul 10 00:26:07.861935 containerd[1724]: time="2025-07-10T00:26:07.861906966Z" level=info msg="connecting to shim 2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64" address="unix:///run/containerd/s/1dd0b427d023743a396c7f884dc9dc3d6b23ae557b1bbdf18589f9b67b8b9b1b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:07.887253 systemd[1]: Started cri-containerd-2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64.scope - libcontainer container 2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64. 
Jul 10 00:26:07.949401 containerd[1724]: time="2025-07-10T00:26:07.949380961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d7c7f5cf4-vjn4s,Uid:25c4ed24-29ac-4bf7-80b8-e00dec1e9317,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64\"" Jul 10 00:26:08.854297 containerd[1724]: time="2025-07-10T00:26:08.854247772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:08.857451 containerd[1724]: time="2025-07-10T00:26:08.857417981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 00:26:08.860142 containerd[1724]: time="2025-07-10T00:26:08.860106849Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:08.863390 containerd[1724]: time="2025-07-10T00:26:08.863344285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:08.864101 containerd[1724]: time="2025-07-10T00:26:08.863786402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.137822309s" Jul 10 00:26:08.864101 containerd[1724]: time="2025-07-10T00:26:08.863815608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:26:08.865104 containerd[1724]: time="2025-07-10T00:26:08.865036175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:26:08.866116 containerd[1724]: time="2025-07-10T00:26:08.866077084Z" level=info msg="CreateContainer within sandbox \"8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:26:08.892823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368748793.mount: Deactivated successfully. Jul 10 00:26:08.893099 containerd[1724]: time="2025-07-10T00:26:08.892925709Z" level=info msg="Container 7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:08.917473 containerd[1724]: time="2025-07-10T00:26:08.917445985Z" level=info msg="CreateContainer within sandbox \"8a714de5586670d181c2a5004127671de3ced22c9032bd2eb78f9d3835533c42\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0\"" Jul 10 00:26:08.917963 containerd[1724]: time="2025-07-10T00:26:08.917807199Z" level=info msg="StartContainer for \"7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0\"" Jul 10 00:26:08.919115 containerd[1724]: time="2025-07-10T00:26:08.919041377Z" level=info msg="connecting to shim 7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0" address="unix:///run/containerd/s/830032f4f114c8f81b553920ce01a1146e8204f54932f78b8e3bf00596769583" protocol=ttrpc version=3 Jul 10 00:26:08.939233 systemd[1]: Started cri-containerd-7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0.scope - libcontainer container 7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0. 
Jul 10 00:26:08.995140 containerd[1724]: time="2025-07-10T00:26:08.995115345Z" level=info msg="StartContainer for \"7f2217754629f7e251c010bb758bbb3afb71334054624d11d3aa1dbd06f1c5a0\" returns successfully" Jul 10 00:26:09.194397 containerd[1724]: time="2025-07-10T00:26:09.194345984Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:09.198096 containerd[1724]: time="2025-07-10T00:26:09.197707214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:26:09.199129 containerd[1724]: time="2025-07-10T00:26:09.199108217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 334.044825ms" Jul 10 00:26:09.199211 containerd[1724]: time="2025-07-10T00:26:09.199200721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:26:09.202030 containerd[1724]: time="2025-07-10T00:26:09.201926070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:26:09.203023 containerd[1724]: time="2025-07-10T00:26:09.202999001Z" level=info msg="CreateContainer within sandbox \"5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:26:09.220508 containerd[1724]: time="2025-07-10T00:26:09.220474364Z" level=info msg="Container 99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:09.239185 
containerd[1724]: time="2025-07-10T00:26:09.239152220Z" level=info msg="CreateContainer within sandbox \"5d73392667fa2941d8845255b3f8e7abe89548cdead2446369604facc8e0b747\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81\"" Jul 10 00:26:09.240103 containerd[1724]: time="2025-07-10T00:26:09.239674361Z" level=info msg="StartContainer for \"99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81\"" Jul 10 00:26:09.241805 containerd[1724]: time="2025-07-10T00:26:09.241664901Z" level=info msg="connecting to shim 99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81" address="unix:///run/containerd/s/cc61e309bb85c9207d4b5c93ea187a1d4d144c756e4dd4de5e60fec873f940e2" protocol=ttrpc version=3 Jul 10 00:26:09.262105 systemd[1]: Started cri-containerd-99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81.scope - libcontainer container 99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81. 
Jul 10 00:26:09.320483 containerd[1724]: time="2025-07-10T00:26:09.320431818Z" level=info msg="StartContainer for \"99f9efe0daa570ddbb57b079025b832dae47fb39b9729347c3a781366c811d81\" returns successfully" Jul 10 00:26:09.488230 containerd[1724]: time="2025-07-10T00:26:09.488131346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fbp,Uid:db083789-01cf-4460-b9a7-87ff0739e6f2,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:09.509269 systemd-networkd[1357]: calib092ca0c4a2: Gained IPv6LL Jul 10 00:26:09.573221 systemd-networkd[1357]: calid244c3dcc07: Gained IPv6LL Jul 10 00:26:09.627007 systemd-networkd[1357]: calie5aa3a905bf: Link UP Jul 10 00:26:09.627820 systemd-networkd[1357]: calie5aa3a905bf: Gained carrier Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.553 [INFO][5044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0 coredns-7c65d6cfc9- kube-system db083789-01cf-4460-b9a7-87ff0739e6f2 829 0 2025-07-10 00:25:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a coredns-7c65d6cfc9-d6fbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5aa3a905bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.553 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" 
WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.581 [INFO][5056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" HandleID="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.582 [INFO][5056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" HandleID="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"coredns-7c65d6cfc9-d6fbp", "timestamp":"2025-07-10 00:26:09.581921209 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.582 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.582 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.582 [INFO][5056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.589 [INFO][5056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.595 [INFO][5056] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.599 [INFO][5056] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.601 [INFO][5056] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.603 [INFO][5056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.603 [INFO][5056] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.604 [INFO][5056] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848 Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.610 [INFO][5056] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.618 [INFO][5056] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.70/26] block=192.168.78.64/26 handle="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.619 [INFO][5056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.70/26] handle="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.619 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:09.649170 containerd[1724]: 2025-07-10 00:26:09.619 [INFO][5056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.70/26] IPv6=[] ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" HandleID="k8s-pod-network.007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.623 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"db083789-01cf-4460-b9a7-87ff0739e6f2", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"coredns-7c65d6cfc9-d6fbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5aa3a905bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.623 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.70/32] ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.623 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5aa3a905bf ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.629 [INFO][5044] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.630 [INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"db083789-01cf-4460-b9a7-87ff0739e6f2", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848", Pod:"coredns-7c65d6cfc9-d6fbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5aa3a905bf", MAC:"32:77:2d:95:3b:fa", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:09.651561 containerd[1724]: 2025-07-10 00:26:09.647 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d6fbp" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-coredns--7c65d6cfc9--d6fbp-eth0" Jul 10 00:26:09.673726 kubelet[3140]: I0710 00:26:09.673580 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-9mdgf" podStartSLOduration=41.302732139 podStartE2EDuration="42.673563247s" podCreationTimestamp="2025-07-10 00:25:27 +0000 UTC" firstStartedPulling="2025-07-10 00:26:07.829193177 +0000 UTC m=+56.429963577" lastFinishedPulling="2025-07-10 00:26:09.20002428 +0000 UTC m=+57.800794685" observedRunningTime="2025-07-10 00:26:09.672461343 +0000 UTC m=+58.273231765" watchObservedRunningTime="2025-07-10 00:26:09.673563247 +0000 UTC m=+58.274333663" Jul 10 00:26:09.700749 containerd[1724]: time="2025-07-10T00:26:09.700696218Z" level=info msg="connecting to shim 007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848" address="unix:///run/containerd/s/23e8c1cc18d28bf15405bd9049b599f2f164a0085f55b1e8d9e55c265a257856" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:09.740255 systemd[1]: Started cri-containerd-007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848.scope - libcontainer container 
007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848. Jul 10 00:26:09.800917 containerd[1724]: time="2025-07-10T00:26:09.800774722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d6fbp,Uid:db083789-01cf-4460-b9a7-87ff0739e6f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848\"" Jul 10 00:26:09.805278 containerd[1724]: time="2025-07-10T00:26:09.805250603Z" level=info msg="CreateContainer within sandbox \"007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:09.843681 containerd[1724]: time="2025-07-10T00:26:09.842728591Z" level=info msg="Container 4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:09.877310 containerd[1724]: time="2025-07-10T00:26:09.877283481Z" level=info msg="CreateContainer within sandbox \"007377d445311cfc059b805b2da6978b3469391afdfdf7556d0eb8b70774b848\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528\"" Jul 10 00:26:09.878452 containerd[1724]: time="2025-07-10T00:26:09.878433325Z" level=info msg="StartContainer for \"4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528\"" Jul 10 00:26:09.879274 containerd[1724]: time="2025-07-10T00:26:09.879252666Z" level=info msg="connecting to shim 4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528" address="unix:///run/containerd/s/23e8c1cc18d28bf15405bd9049b599f2f164a0085f55b1e8d9e55c265a257856" protocol=ttrpc version=3 Jul 10 00:26:09.904649 systemd[1]: Started cri-containerd-4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528.scope - libcontainer container 4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528. 
Jul 10 00:26:09.938871 containerd[1724]: time="2025-07-10T00:26:09.938843405Z" level=info msg="StartContainer for \"4ab62df7b8e7ea6ec93b18337b284afedfd3af1505c504096d17ff964e07a528\" returns successfully" Jul 10 00:26:10.487341 containerd[1724]: time="2025-07-10T00:26:10.487302177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7txww,Uid:05230daf-d269-4253-88bf-bb9982a9f5ee,Namespace:calico-system,Attempt:0,}" Jul 10 00:26:10.487493 containerd[1724]: time="2025-07-10T00:26:10.487302212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lpxzc,Uid:1312a2a6-f706-422b-9037-878bec5fe5cb,Namespace:calico-system,Attempt:0,}" Jul 10 00:26:10.499119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489936478.mount: Deactivated successfully. Jul 10 00:26:10.646445 systemd-networkd[1357]: calic9320e8e6d4: Link UP Jul 10 00:26:10.649077 systemd-networkd[1357]: calic9320e8e6d4: Gained carrier Jul 10 00:26:10.666104 kubelet[3140]: I0710 00:26:10.666029 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fdfb9b9b5-bdww2" podStartSLOduration=41.527111056 podStartE2EDuration="43.666009825s" podCreationTimestamp="2025-07-10 00:25:27 +0000 UTC" firstStartedPulling="2025-07-10 00:26:06.725632235 +0000 UTC m=+55.326402631" lastFinishedPulling="2025-07-10 00:26:08.86453099 +0000 UTC m=+57.465301400" observedRunningTime="2025-07-10 00:26:09.692447765 +0000 UTC m=+58.293218196" watchObservedRunningTime="2025-07-10 00:26:10.666009825 +0000 UTC m=+59.266780237" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.554 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0 csi-node-driver- calico-system 05230daf-d269-4253-88bf-bb9982a9f5ee 691 0 2025-07-10 00:25:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a csi-node-driver-7txww eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9320e8e6d4 [] [] }} ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.554 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.608 [INFO][5181] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" HandleID="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.608 [INFO][5181] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" HandleID="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"csi-node-driver-7txww", "timestamp":"2025-07-10 00:26:10.608172992 +0000 UTC"}, 
Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.608 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.608 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.608 [INFO][5181] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.616 [INFO][5181] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.619 [INFO][5181] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.623 [INFO][5181] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.624 [INFO][5181] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.626 [INFO][5181] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.626 [INFO][5181] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.628 
[INFO][5181] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020 Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.632 [INFO][5181] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.640 [INFO][5181] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.71/26] block=192.168.78.64/26 handle="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.641 [INFO][5181] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.71/26] handle="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.641 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:26:10.666994 containerd[1724]: 2025-07-10 00:26:10.641 [INFO][5181] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.71/26] IPv6=[] ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" HandleID="k8s-pod-network.929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.643 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05230daf-d269-4253-88bf-bb9982a9f5ee", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"csi-node-driver-7txww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.71/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9320e8e6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.643 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.71/32] ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.643 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9320e8e6d4 ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.651 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.652 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"05230daf-d269-4253-88bf-bb9982a9f5ee", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020", Pod:"csi-node-driver-7txww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9320e8e6d4", MAC:"7e:07:d4:ec:96:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:10.667610 containerd[1724]: 2025-07-10 00:26:10.664 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" Namespace="calico-system" Pod="csi-node-driver-7txww" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-csi--node--driver--7txww-eth0" Jul 10 00:26:10.674482 kubelet[3140]: I0710 00:26:10.674460 3140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:26:10.688796 kubelet[3140]: I0710 00:26:10.688107 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d6fbp" 
podStartSLOduration=52.688077465 podStartE2EDuration="52.688077465s" podCreationTimestamp="2025-07-10 00:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:10.687980763 +0000 UTC m=+59.288751182" watchObservedRunningTime="2025-07-10 00:26:10.688077465 +0000 UTC m=+59.288847884" Jul 10 00:26:10.981232 systemd-networkd[1357]: calie5aa3a905bf: Gained IPv6LL Jul 10 00:26:11.013742 systemd-networkd[1357]: cali44f56f0bbce: Link UP Jul 10 00:26:11.013950 systemd-networkd[1357]: cali44f56f0bbce: Gained carrier Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.553 [INFO][5154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0 goldmane-58fd7646b9- calico-system 1312a2a6-f706-422b-9037-878bec5fe5cb 834 0 2025-07-10 00:25:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-n-4eb7f9ac8a goldmane-58fd7646b9-lpxzc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali44f56f0bbce [] [] }} ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.553 [INFO][5154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.611 [INFO][5179] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" HandleID="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.611 [INFO][5179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" HandleID="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-4eb7f9ac8a", "pod":"goldmane-58fd7646b9-lpxzc", "timestamp":"2025-07-10 00:26:10.611474086 +0000 UTC"}, Hostname:"ci-4344.1.1-n-4eb7f9ac8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.611 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.641 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.641 [INFO][5179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-4eb7f9ac8a' Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.717 [INFO][5179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.724 [INFO][5179] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.728 [INFO][5179] ipam/ipam.go 511: Trying affinity for 192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.730 [INFO][5179] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.733 [INFO][5179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.64/26 host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.733 [INFO][5179] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.64/26 handle="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.735 [INFO][5179] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778 Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:10.742 [INFO][5179] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.64/26 handle="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:11.007 [INFO][5179] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.78.72/26] block=192.168.78.64/26 handle="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:11.007 [INFO][5179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.72/26] handle="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" host="ci-4344.1.1-n-4eb7f9ac8a" Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:11.007 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:26:11.034949 containerd[1724]: 2025-07-10 00:26:11.007 [INFO][5179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.72/26] IPv6=[] ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" HandleID="k8s-pod-network.91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Workload="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.009 [INFO][5154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1312a2a6-f706-422b-9037-878bec5fe5cb", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"", Pod:"goldmane-58fd7646b9-lpxzc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44f56f0bbce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.009 [INFO][5154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.72/32] ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.010 [INFO][5154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44f56f0bbce ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.013 [INFO][5154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.014 [INFO][5154] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1312a2a6-f706-422b-9037-878bec5fe5cb", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-4eb7f9ac8a", ContainerID:"91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778", Pod:"goldmane-58fd7646b9-lpxzc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44f56f0bbce", MAC:"f2:75:69:be:10:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:26:11.036359 containerd[1724]: 2025-07-10 00:26:11.030 [INFO][5154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" Namespace="calico-system" Pod="goldmane-58fd7646b9-lpxzc" WorkloadEndpoint="ci--4344.1.1--n--4eb7f9ac8a-k8s-goldmane--58fd7646b9--lpxzc-eth0" Jul 10 00:26:11.066030 containerd[1724]: time="2025-07-10T00:26:11.065947910Z" level=info msg="connecting to shim 929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020" address="unix:///run/containerd/s/204d7a2e5bde96645ddc34ba46de4960064cf9c6ca193e125a8f33af9b305493" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:11.118070 containerd[1724]: time="2025-07-10T00:26:11.118030864Z" level=info msg="connecting to shim 91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778" address="unix:///run/containerd/s/f566f8ef0233df316b7bd51465fc7aa4eca37369df1648a31f75d9befc537ae6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:11.121816 systemd[1]: Started cri-containerd-929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020.scope - libcontainer container 929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020. Jul 10 00:26:11.152314 systemd[1]: Started cri-containerd-91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778.scope - libcontainer container 91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778. 
Jul 10 00:26:11.184358 containerd[1724]: time="2025-07-10T00:26:11.184328992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7txww,Uid:05230daf-d269-4253-88bf-bb9982a9f5ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020\"" Jul 10 00:26:11.217520 containerd[1724]: time="2025-07-10T00:26:11.217496316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"bf10eb7cac75953c44cfccaba2c39ddcae8a320746ccb9c1617e800e3205f517\" pid:5250 exited_at:{seconds:1752107171 nanos:217203742}" Jul 10 00:26:11.247875 containerd[1724]: time="2025-07-10T00:26:11.247840098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lpxzc,Uid:1312a2a6-f706-422b-9037-878bec5fe5cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778\"" Jul 10 00:26:11.869767 containerd[1724]: time="2025-07-10T00:26:11.869721339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:11.873991 containerd[1724]: time="2025-07-10T00:26:11.873965011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:26:11.878480 containerd[1724]: time="2025-07-10T00:26:11.878426558Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:11.882580 containerd[1724]: time="2025-07-10T00:26:11.882523673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 
00:26:11.883099 containerd[1724]: time="2025-07-10T00:26:11.882865260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.680684103s" Jul 10 00:26:11.883099 containerd[1724]: time="2025-07-10T00:26:11.882896895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:26:11.883772 containerd[1724]: time="2025-07-10T00:26:11.883748483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:26:11.895676 containerd[1724]: time="2025-07-10T00:26:11.895650472Z" level=info msg="CreateContainer within sandbox \"2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:26:11.922096 containerd[1724]: time="2025-07-10T00:26:11.919707925Z" level=info msg="Container c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:11.926520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325728771.mount: Deactivated successfully. 
Jul 10 00:26:11.935699 containerd[1724]: time="2025-07-10T00:26:11.935672266Z" level=info msg="CreateContainer within sandbox \"2a423afbc3a6be58bc926626627620fdbd987ffd45bd52d721c47bda0f49de64\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\"" Jul 10 00:26:11.936102 containerd[1724]: time="2025-07-10T00:26:11.936039626Z" level=info msg="StartContainer for \"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\"" Jul 10 00:26:11.937031 containerd[1724]: time="2025-07-10T00:26:11.936993262Z" level=info msg="connecting to shim c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e" address="unix:///run/containerd/s/1dd0b427d023743a396c7f884dc9dc3d6b23ae557b1bbdf18589f9b67b8b9b1b" protocol=ttrpc version=3 Jul 10 00:26:11.956222 systemd[1]: Started cri-containerd-c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e.scope - libcontainer container c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e. 
Jul 10 00:26:11.996741 containerd[1724]: time="2025-07-10T00:26:11.996712594Z" level=info msg="StartContainer for \"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" returns successfully" Jul 10 00:26:12.453305 systemd-networkd[1357]: cali44f56f0bbce: Gained IPv6LL Jul 10 00:26:12.581217 systemd-networkd[1357]: calic9320e8e6d4: Gained IPv6LL Jul 10 00:26:12.692697 kubelet[3140]: I0710 00:26:12.692302 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d7c7f5cf4-vjn4s" podStartSLOduration=37.75897709 podStartE2EDuration="41.692284378s" podCreationTimestamp="2025-07-10 00:25:31 +0000 UTC" firstStartedPulling="2025-07-10 00:26:07.95027911 +0000 UTC m=+56.551049509" lastFinishedPulling="2025-07-10 00:26:11.883586389 +0000 UTC m=+60.484356797" observedRunningTime="2025-07-10 00:26:12.69152338 +0000 UTC m=+61.292293792" watchObservedRunningTime="2025-07-10 00:26:12.692284378 +0000 UTC m=+61.293054790" Jul 10 00:26:13.404923 containerd[1724]: time="2025-07-10T00:26:13.404876661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:13.407726 containerd[1724]: time="2025-07-10T00:26:13.407687171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 00:26:13.410827 containerd[1724]: time="2025-07-10T00:26:13.410785534Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:13.414949 containerd[1724]: time="2025-07-10T00:26:13.414901809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:13.415464 containerd[1724]: 
time="2025-07-10T00:26:13.415337568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.531557243s" Jul 10 00:26:13.415464 containerd[1724]: time="2025-07-10T00:26:13.415366064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:26:13.416664 containerd[1724]: time="2025-07-10T00:26:13.416570538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:26:13.418064 containerd[1724]: time="2025-07-10T00:26:13.418038151Z" level=info msg="CreateContainer within sandbox \"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:26:13.437009 containerd[1724]: time="2025-07-10T00:26:13.436980980Z" level=info msg="Container cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:13.451927 containerd[1724]: time="2025-07-10T00:26:13.451900043Z" level=info msg="CreateContainer within sandbox \"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3\"" Jul 10 00:26:13.453839 containerd[1724]: time="2025-07-10T00:26:13.452449199Z" level=info msg="StartContainer for \"cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3\"" Jul 10 00:26:13.453839 containerd[1724]: time="2025-07-10T00:26:13.453714980Z" level=info msg="connecting to shim cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3" 
address="unix:///run/containerd/s/204d7a2e5bde96645ddc34ba46de4960064cf9c6ca193e125a8f33af9b305493" protocol=ttrpc version=3 Jul 10 00:26:13.473269 systemd[1]: Started cri-containerd-cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3.scope - libcontainer container cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3. Jul 10 00:26:13.505440 containerd[1724]: time="2025-07-10T00:26:13.505403379Z" level=info msg="StartContainer for \"cd92381ad26caede95e7535bd9a6ac3755f47485dc250b22f82e6d1cfef434b3\" returns successfully" Jul 10 00:26:13.719649 containerd[1724]: time="2025-07-10T00:26:13.719474923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"48dfa765121c3caca62d10f926a95f10f278ae00bef74303bc1ec2d24bab558f\" pid:5442 exited_at:{seconds:1752107173 nanos:719035374}" Jul 10 00:26:15.442305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484212632.mount: Deactivated successfully. 
Jul 10 00:26:16.464919 containerd[1724]: time="2025-07-10T00:26:16.464869974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:16.467828 containerd[1724]: time="2025-07-10T00:26:16.467782720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 00:26:16.470413 containerd[1724]: time="2025-07-10T00:26:16.470356011Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:16.476299 containerd[1724]: time="2025-07-10T00:26:16.476211002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:16.477411 containerd[1724]: time="2025-07-10T00:26:16.477297797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.060693852s" Jul 10 00:26:16.477411 containerd[1724]: time="2025-07-10T00:26:16.477328191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:26:16.478917 containerd[1724]: time="2025-07-10T00:26:16.478365922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:26:16.480167 containerd[1724]: time="2025-07-10T00:26:16.480138460Z" level=info msg="CreateContainer within sandbox 
\"91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:26:16.495594 containerd[1724]: time="2025-07-10T00:26:16.495563467Z" level=info msg="Container 2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:16.517028 containerd[1724]: time="2025-07-10T00:26:16.516991248Z" level=info msg="CreateContainer within sandbox \"91003a50b62cf47ef1c72f65ff195276c0f6ed541221929a144292f2f3164778\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\"" Jul 10 00:26:16.517466 containerd[1724]: time="2025-07-10T00:26:16.517446867Z" level=info msg="StartContainer for \"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\"" Jul 10 00:26:16.520363 containerd[1724]: time="2025-07-10T00:26:16.520320432Z" level=info msg="connecting to shim 2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4" address="unix:///run/containerd/s/f566f8ef0233df316b7bd51465fc7aa4eca37369df1648a31f75d9befc537ae6" protocol=ttrpc version=3 Jul 10 00:26:16.553418 systemd[1]: Started cri-containerd-2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4.scope - libcontainer container 2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4. 
Jul 10 00:26:16.726295 containerd[1724]: time="2025-07-10T00:26:16.726217771Z" level=info msg="StartContainer for \"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" returns successfully" Jul 10 00:26:17.717636 kubelet[3140]: I0710 00:26:17.716067 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-lpxzc" podStartSLOduration=43.487257753 podStartE2EDuration="48.716049613s" podCreationTimestamp="2025-07-10 00:25:29 +0000 UTC" firstStartedPulling="2025-07-10 00:26:11.24923991 +0000 UTC m=+59.850010313" lastFinishedPulling="2025-07-10 00:26:16.478031773 +0000 UTC m=+65.078802173" observedRunningTime="2025-07-10 00:26:17.714382831 +0000 UTC m=+66.315153241" watchObservedRunningTime="2025-07-10 00:26:17.716049613 +0000 UTC m=+66.316820039" Jul 10 00:26:18.008716 containerd[1724]: time="2025-07-10T00:26:18.008594883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:18.011211 containerd[1724]: time="2025-07-10T00:26:18.011177563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 00:26:18.023100 containerd[1724]: time="2025-07-10T00:26:18.022864998Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:18.027833 containerd[1724]: time="2025-07-10T00:26:18.027798012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"a59b7700772b2738be6683fdab41bbccd323d62940e24ede93c5781deb5b58b2\" pid:5511 exit_status:1 exited_at:{seconds:1752107178 nanos:27210281}" Jul 10 00:26:18.028808 containerd[1724]: time="2025-07-10T00:26:18.028782459Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:18.030405 containerd[1724]: time="2025-07-10T00:26:18.030295714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.551900436s" Jul 10 00:26:18.030405 containerd[1724]: time="2025-07-10T00:26:18.030324827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:26:18.035129 containerd[1724]: time="2025-07-10T00:26:18.035105953Z" level=info msg="CreateContainer within sandbox \"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:26:18.065249 containerd[1724]: time="2025-07-10T00:26:18.065219718Z" level=info msg="Container e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:18.083555 containerd[1724]: time="2025-07-10T00:26:18.083527818Z" level=info msg="CreateContainer within sandbox \"929b4deee3fe57068b9b5276583d6fb3e6140ef4dd78ac5e4c386928d98b9020\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf\"" Jul 10 00:26:18.084162 containerd[1724]: time="2025-07-10T00:26:18.084131264Z" level=info msg="StartContainer for \"e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf\"" Jul 10 00:26:18.086255 
containerd[1724]: time="2025-07-10T00:26:18.086225636Z" level=info msg="connecting to shim e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf" address="unix:///run/containerd/s/204d7a2e5bde96645ddc34ba46de4960064cf9c6ca193e125a8f33af9b305493" protocol=ttrpc version=3 Jul 10 00:26:18.111538 systemd[1]: Started cri-containerd-e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf.scope - libcontainer container e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf. Jul 10 00:26:18.173480 containerd[1724]: time="2025-07-10T00:26:18.173402382Z" level=info msg="StartContainer for \"e176fc74c74e4b5fa9d9706df872cafaf1f466a1355a78616c61b2c0f87116cf\" returns successfully" Jul 10 00:26:18.621793 kubelet[3140]: I0710 00:26:18.621446 3140 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:26:18.621793 kubelet[3140]: I0710 00:26:18.621482 3140 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:26:18.773837 containerd[1724]: time="2025-07-10T00:26:18.773798909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"8e20cb56430313c824fa1e8c99f588e352fea901650864d3fd14567e5e118ea8\" pid:5570 exit_status:1 exited_at:{seconds:1752107178 nanos:773497250}" Jul 10 00:26:21.999449 containerd[1724]: time="2025-07-10T00:26:21.999392323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"03c711d269c60e7ebc7dc809c36e552bf3c2b6361ebac30668e1c67415b9ee78\" pid:5597 exited_at:{seconds:1752107181 nanos:999062807}" Jul 10 00:26:22.066109 containerd[1724]: time="2025-07-10T00:26:22.066047047Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"851f71945f05868132f9a627a74b0910ec86d3a6adf14e0818e402b9b55ed3f7\" pid:5619 exit_status:1 exited_at:{seconds:1752107182 nanos:65826159}" Jul 10 00:26:24.481600 kubelet[3140]: I0710 00:26:24.481145 3140 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:26:24.504813 kubelet[3140]: I0710 00:26:24.504757 3140 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7txww" podStartSLOduration=47.658432732 podStartE2EDuration="54.504741047s" podCreationTimestamp="2025-07-10 00:25:30 +0000 UTC" firstStartedPulling="2025-07-10 00:26:11.185869655 +0000 UTC m=+59.786640062" lastFinishedPulling="2025-07-10 00:26:18.032177977 +0000 UTC m=+66.632948377" observedRunningTime="2025-07-10 00:26:18.720247145 +0000 UTC m=+67.321017551" watchObservedRunningTime="2025-07-10 00:26:24.504741047 +0000 UTC m=+73.105511507" Jul 10 00:26:41.083101 containerd[1724]: time="2025-07-10T00:26:41.083025455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"2f492e4a0ac9cc1573984777b62b8cdfd7283edb688d4ffb11d2dbb1aa6dd541\" pid:5652 exited_at:{seconds:1752107201 nanos:82781709}" Jul 10 00:26:52.000403 containerd[1724]: time="2025-07-10T00:26:52.000358433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"adac32c4278f470cb893db87de5ae638b85c0b3d195ef6f1bb6469569639cc6b\" pid:5687 exited_at:{seconds:1752107212 nanos:60930}" Jul 10 00:26:52.071016 containerd[1724]: time="2025-07-10T00:26:52.070967809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"96ff1f49502d46cdf3c9dcfb805044eab954397632448e9c4ba0b589c0bb13b4\" pid:5708 exited_at:{seconds:1752107212 nanos:70732938}" 
Jul 10 00:27:00.605849 containerd[1724]: time="2025-07-10T00:27:00.605791126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"c2a4340ae098fc20842d7c2370799b1b6d0032fbcf5de506aaf95db32b141c23\" pid:5735 exited_at:{seconds:1752107220 nanos:604910184}" Jul 10 00:27:03.605374 systemd[1]: Started sshd@7-10.200.8.45:22-10.200.16.10:35042.service - OpenSSH per-connection server daemon (10.200.16.10:35042). Jul 10 00:27:04.234515 sshd[5750]: Accepted publickey for core from 10.200.16.10 port 35042 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:04.235690 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:04.240024 systemd-logind[1699]: New session 10 of user core. Jul 10 00:27:04.244247 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:27:04.784265 sshd[5752]: Connection closed by 10.200.16.10 port 35042 Jul 10 00:27:04.785719 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:04.789861 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:27:04.790951 systemd[1]: sshd@7-10.200.8.45:22-10.200.16.10:35042.service: Deactivated successfully. Jul 10 00:27:04.793862 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:27:04.800930 systemd-logind[1699]: Removed session 10. Jul 10 00:27:09.902347 systemd[1]: Started sshd@8-10.200.8.45:22-10.200.16.10:42450.service - OpenSSH per-connection server daemon (10.200.16.10:42450). Jul 10 00:27:10.535521 sshd[5766]: Accepted publickey for core from 10.200.16.10 port 42450 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:10.536645 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:10.541025 systemd-logind[1699]: New session 11 of user core. 
Jul 10 00:27:10.545243 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:27:11.025865 sshd[5768]: Connection closed by 10.200.16.10 port 42450 Jul 10 00:27:11.026541 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:11.030726 systemd[1]: sshd@8-10.200.8.45:22-10.200.16.10:42450.service: Deactivated successfully. Jul 10 00:27:11.033952 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:27:11.035608 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:27:11.038689 systemd-logind[1699]: Removed session 11. Jul 10 00:27:11.077301 containerd[1724]: time="2025-07-10T00:27:11.077260612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"b3719c8d05130075feadb42f3b273d4524504a1e5f757d02c1fa8125f04a9806\" pid:5789 exited_at:{seconds:1752107231 nanos:76986550}" Jul 10 00:27:16.149348 systemd[1]: Started sshd@9-10.200.8.45:22-10.200.16.10:42456.service - OpenSSH per-connection server daemon (10.200.16.10:42456). Jul 10 00:27:16.787936 sshd[5807]: Accepted publickey for core from 10.200.16.10 port 42456 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:16.788978 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:16.793261 systemd-logind[1699]: New session 12 of user core. Jul 10 00:27:16.796225 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 00:27:17.083982 containerd[1724]: time="2025-07-10T00:27:17.083874637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"36df6f151bc4ad85973b0fa9aaa2b6ba3ad7b1468e900db93eb29abe59b5d6ca\" pid:5824 exited_at:{seconds:1752107237 nanos:83661418}" Jul 10 00:27:17.285906 sshd[5809]: Connection closed by 10.200.16.10 port 42456 Jul 10 00:27:17.286443 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:17.289015 systemd[1]: sshd@9-10.200.8.45:22-10.200.16.10:42456.service: Deactivated successfully. Jul 10 00:27:17.291549 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:27:17.292278 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:27:17.293119 systemd-logind[1699]: Removed session 12. Jul 10 00:27:17.396990 systemd[1]: Started sshd@10-10.200.8.45:22-10.200.16.10:42472.service - OpenSSH per-connection server daemon (10.200.16.10:42472). Jul 10 00:27:18.025649 sshd[5845]: Accepted publickey for core from 10.200.16.10 port 42472 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:18.026702 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:18.030887 systemd-logind[1699]: New session 13 of user core. Jul 10 00:27:18.033275 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:27:18.543683 sshd[5851]: Connection closed by 10.200.16.10 port 42472 Jul 10 00:27:18.544267 sshd-session[5845]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:18.547061 systemd[1]: sshd@10-10.200.8.45:22-10.200.16.10:42472.service: Deactivated successfully. Jul 10 00:27:18.548778 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:27:18.549991 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:27:18.551301 systemd-logind[1699]: Removed session 13. 
Jul 10 00:27:18.683793 systemd[1]: Started sshd@11-10.200.8.45:22-10.200.16.10:42474.service - OpenSSH per-connection server daemon (10.200.16.10:42474). Jul 10 00:27:19.311457 sshd[5861]: Accepted publickey for core from 10.200.16.10 port 42474 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:19.312662 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:19.316617 systemd-logind[1699]: New session 14 of user core. Jul 10 00:27:19.320215 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:27:19.803817 sshd[5865]: Connection closed by 10.200.16.10 port 42474 Jul 10 00:27:19.804354 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:19.807939 systemd[1]: sshd@11-10.200.8.45:22-10.200.16.10:42474.service: Deactivated successfully. Jul 10 00:27:19.809686 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:27:19.810627 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:27:19.811848 systemd-logind[1699]: Removed session 14. Jul 10 00:27:21.999957 containerd[1724]: time="2025-07-10T00:27:21.999904458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"d4df91c075a0ee03d887fee7bd4c8fd2d536fa894b79b914a9002ce33a4efcfa\" pid:5888 exited_at:{seconds:1752107241 nanos:999719822}" Jul 10 00:27:22.063096 containerd[1724]: time="2025-07-10T00:27:22.063055192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"f6c94ee63a155048313786ded569253b6578bae6e1d17f84d40c2d1b519f312f\" pid:5909 exited_at:{seconds:1752107242 nanos:62856341}" Jul 10 00:27:24.934626 systemd[1]: Started sshd@12-10.200.8.45:22-10.200.16.10:57230.service - OpenSSH per-connection server daemon (10.200.16.10:57230). 
Jul 10 00:27:25.564538 sshd[5926]: Accepted publickey for core from 10.200.16.10 port 57230 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:25.565722 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:25.570190 systemd-logind[1699]: New session 15 of user core. Jul 10 00:27:25.574250 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:27:26.054705 sshd[5928]: Connection closed by 10.200.16.10 port 57230 Jul 10 00:27:26.055349 sshd-session[5926]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:26.058353 systemd[1]: sshd@12-10.200.8.45:22-10.200.16.10:57230.service: Deactivated successfully. Jul 10 00:27:26.060292 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:27:26.061633 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:27:26.062945 systemd-logind[1699]: Removed session 15. Jul 10 00:27:26.168242 systemd[1]: Started sshd@13-10.200.8.45:22-10.200.16.10:57234.service - OpenSSH per-connection server daemon (10.200.16.10:57234). Jul 10 00:27:26.800672 sshd[5940]: Accepted publickey for core from 10.200.16.10 port 57234 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:26.801802 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:26.806296 systemd-logind[1699]: New session 16 of user core. Jul 10 00:27:26.814251 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:27:27.376041 sshd[5942]: Connection closed by 10.200.16.10 port 57234 Jul 10 00:27:27.376597 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:27.379790 systemd[1]: sshd@13-10.200.8.45:22-10.200.16.10:57234.service: Deactivated successfully. Jul 10 00:27:27.381704 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:27:27.382610 systemd-logind[1699]: Session 16 logged out. 
Waiting for processes to exit. Jul 10 00:27:27.384069 systemd-logind[1699]: Removed session 16. Jul 10 00:27:27.498299 systemd[1]: Started sshd@14-10.200.8.45:22-10.200.16.10:57238.service - OpenSSH per-connection server daemon (10.200.16.10:57238). Jul 10 00:27:28.126365 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 57238 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:28.127721 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:28.132198 systemd-logind[1699]: New session 17 of user core. Jul 10 00:27:28.136243 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:27:30.319584 sshd[5954]: Connection closed by 10.200.16.10 port 57238 Jul 10 00:27:30.320131 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:30.322756 systemd[1]: sshd@14-10.200.8.45:22-10.200.16.10:57238.service: Deactivated successfully. Jul 10 00:27:30.324478 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:27:30.324675 systemd[1]: session-17.scope: Consumed 457ms CPU time, 76.7M memory peak. Jul 10 00:27:30.326670 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:27:30.327622 systemd-logind[1699]: Removed session 17. Jul 10 00:27:30.435009 systemd[1]: Started sshd@15-10.200.8.45:22-10.200.16.10:37442.service - OpenSSH per-connection server daemon (10.200.16.10:37442). Jul 10 00:27:31.064854 sshd[5971]: Accepted publickey for core from 10.200.16.10 port 37442 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:31.066023 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:31.071140 systemd-logind[1699]: New session 18 of user core. Jul 10 00:27:31.077246 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 10 00:27:31.629671 sshd[5975]: Connection closed by 10.200.16.10 port 37442 Jul 10 00:27:31.630250 sshd-session[5971]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:31.633511 systemd[1]: sshd@15-10.200.8.45:22-10.200.16.10:37442.service: Deactivated successfully. Jul 10 00:27:31.635429 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:27:31.636639 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:27:31.637711 systemd-logind[1699]: Removed session 18. Jul 10 00:27:31.743116 systemd[1]: Started sshd@16-10.200.8.45:22-10.200.16.10:37450.service - OpenSSH per-connection server daemon (10.200.16.10:37450). Jul 10 00:27:32.370865 sshd[5985]: Accepted publickey for core from 10.200.16.10 port 37450 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:32.371972 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:32.376437 systemd-logind[1699]: New session 19 of user core. Jul 10 00:27:32.381213 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:27:32.860669 sshd[5987]: Connection closed by 10.200.16.10 port 37450 Jul 10 00:27:32.861208 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:32.864390 systemd[1]: sshd@16-10.200.8.45:22-10.200.16.10:37450.service: Deactivated successfully. Jul 10 00:27:32.866207 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:27:32.866907 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:27:32.868358 systemd-logind[1699]: Removed session 19. Jul 10 00:27:37.975910 systemd[1]: Started sshd@17-10.200.8.45:22-10.200.16.10:37454.service - OpenSSH per-connection server daemon (10.200.16.10:37454). 
Jul 10 00:27:38.606417 sshd[6011]: Accepted publickey for core from 10.200.16.10 port 37454 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:38.607558 sshd-session[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:38.611233 systemd-logind[1699]: New session 20 of user core. Jul 10 00:27:38.616232 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:27:39.118174 sshd[6027]: Connection closed by 10.200.16.10 port 37454 Jul 10 00:27:39.118458 sshd-session[6011]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:39.125330 systemd[1]: sshd@17-10.200.8.45:22-10.200.16.10:37454.service: Deactivated successfully. Jul 10 00:27:39.128902 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:27:39.132218 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:27:39.135538 systemd-logind[1699]: Removed session 20. Jul 10 00:27:41.073802 containerd[1724]: time="2025-07-10T00:27:41.073756567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cc0ec0fdc779eb684a699d1a276fdfb277867426df1a0f78eb9567a0786c6e0\" id:\"bd2206f417083780137d13315be2c4990be186bfa0418c570dd1db867cf5f29b\" pid:6050 exited_at:{seconds:1752107261 nanos:73501268}" Jul 10 00:27:44.230894 systemd[1]: Started sshd@18-10.200.8.45:22-10.200.16.10:33328.service - OpenSSH per-connection server daemon (10.200.16.10:33328). Jul 10 00:27:44.866961 sshd[6062]: Accepted publickey for core from 10.200.16.10 port 33328 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:44.868125 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:44.872360 systemd-logind[1699]: New session 21 of user core. Jul 10 00:27:44.876307 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 10 00:27:45.366055 sshd[6064]: Connection closed by 10.200.16.10 port 33328 Jul 10 00:27:45.366705 sshd-session[6062]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:45.370067 systemd[1]: sshd@18-10.200.8.45:22-10.200.16.10:33328.service: Deactivated successfully. Jul 10 00:27:45.371894 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:27:45.372719 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:27:45.373760 systemd-logind[1699]: Removed session 21. Jul 10 00:27:50.484071 systemd[1]: Started sshd@19-10.200.8.45:22-10.200.16.10:40372.service - OpenSSH per-connection server daemon (10.200.16.10:40372). Jul 10 00:27:51.124932 sshd[6078]: Accepted publickey for core from 10.200.16.10 port 40372 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:51.127040 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:51.132278 systemd-logind[1699]: New session 22 of user core. Jul 10 00:27:51.137249 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:27:51.648099 sshd[6080]: Connection closed by 10.200.16.10 port 40372 Jul 10 00:27:51.649369 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:51.653123 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:27:51.654372 systemd[1]: sshd@19-10.200.8.45:22-10.200.16.10:40372.service: Deactivated successfully. Jul 10 00:27:51.657026 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:27:51.660387 systemd-logind[1699]: Removed session 22. 
Jul 10 00:27:52.025121 containerd[1724]: time="2025-07-10T00:27:52.023049998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59ad23791329a42292bf448aa5e5e9f1cb8f4673cbbacdff9bd1521fbd1714e\" id:\"a39f03598a98f14d4f3af74d61e57a70733a17de24cab26e72c09e5d8b6b9b22\" pid:6104 exited_at:{seconds:1752107272 nanos:22379478}" Jul 10 00:27:52.094291 containerd[1724]: time="2025-07-10T00:27:52.093975267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"e0d608c4a2fa9ff92aaa468be3359a6bd30e9c94b7327f179923a2e31a87261f\" pid:6121 exited_at:{seconds:1752107272 nanos:93742613}" Jul 10 00:27:56.761362 systemd[1]: Started sshd@20-10.200.8.45:22-10.200.16.10:40376.service - OpenSSH per-connection server daemon (10.200.16.10:40376). Jul 10 00:27:57.402850 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 40376 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:57.403942 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:57.408106 systemd-logind[1699]: New session 23 of user core. Jul 10 00:27:57.415273 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:27:57.974534 sshd[6137]: Connection closed by 10.200.16.10 port 40376 Jul 10 00:27:57.975154 sshd-session[6135]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:57.978464 systemd[1]: sshd@20-10.200.8.45:22-10.200.16.10:40376.service: Deactivated successfully. Jul 10 00:27:57.980180 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:27:57.981019 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:27:57.982305 systemd-logind[1699]: Removed session 23. 
Jul 10 00:28:00.600389 containerd[1724]: time="2025-07-10T00:28:00.600343470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2217c747c61d0a20ed07b82e4a195f26ee40e82f9c2bf4233fbd93a576a286b4\" id:\"4fc6774e99e713931de5c2539b66c782f18638bbc71b34de4b9de6085df64039\" pid:6160 exited_at:{seconds:1752107280 nanos:600120549}" Jul 10 00:28:03.086529 systemd[1]: Started sshd@21-10.200.8.45:22-10.200.16.10:34466.service - OpenSSH per-connection server daemon (10.200.16.10:34466). Jul 10 00:28:04.022199 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 34466 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:04.023368 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:04.027796 systemd-logind[1699]: New session 24 of user core. Jul 10 00:28:04.033219 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:28:04.514047 sshd[6173]: Connection closed by 10.200.16.10 port 34466 Jul 10 00:28:04.514636 sshd-session[6171]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:04.517786 systemd[1]: sshd@21-10.200.8.45:22-10.200.16.10:34466.service: Deactivated successfully. Jul 10 00:28:04.519399 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:28:04.520051 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:28:04.521327 systemd-logind[1699]: Removed session 24.