Jul 7 00:14:14.952652 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:14:14.952675 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:14:14.952684 kernel: BIOS-provided physical RAM map: Jul 7 00:14:14.952690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 00:14:14.952696 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 7 00:14:14.952702 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jul 7 00:14:14.952708 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jul 7 00:14:14.952716 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jul 7 00:14:14.952722 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jul 7 00:14:14.952727 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 7 00:14:14.952733 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 7 00:14:14.952739 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 7 00:14:14.952745 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 7 00:14:14.952751 kernel: printk: legacy bootconsole [earlyser0] enabled Jul 7 00:14:14.952760 kernel: NX (Execute Disable) protection: active Jul 7 00:14:14.952766 kernel: APIC: Static calls initialized Jul 7 00:14:14.952772 kernel: efi: EFI v2.7 by Microsoft Jul 7 00:14:14.952779 kernel: efi: ACPI=0x3fffa000 
ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018 Jul 7 00:14:14.952785 kernel: random: crng init done Jul 7 00:14:14.952791 kernel: secureboot: Secure boot disabled Jul 7 00:14:14.952797 kernel: SMBIOS 3.1.0 present. Jul 7 00:14:14.952804 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Jul 7 00:14:14.952810 kernel: DMI: Memory slots populated: 2/2 Jul 7 00:14:14.952817 kernel: Hypervisor detected: Microsoft Hyper-V Jul 7 00:14:14.952824 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jul 7 00:14:14.952830 kernel: Hyper-V: Nested features: 0x3e0101 Jul 7 00:14:14.952836 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 7 00:14:14.952842 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 7 00:14:14.952849 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 7 00:14:14.952855 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 7 00:14:14.953911 kernel: tsc: Detected 2299.999 MHz processor Jul 7 00:14:14.953924 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:14:14.953932 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:14:14.953943 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jul 7 00:14:14.953950 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 00:14:14.953957 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:14:14.953964 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jul 7 00:14:14.953971 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jul 7 00:14:14.953977 kernel: Using GB pages for direct mapping Jul 7 00:14:14.953984 kernel: ACPI: Early table checksum verification disabled Jul 7 00:14:14.953993 
kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jul 7 00:14:14.954001 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954008 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954015 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 7 00:14:14.954022 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 7 00:14:14.954029 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954036 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954043 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954051 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 7 00:14:14.954057 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 7 00:14:14.954064 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 7 00:14:14.954071 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 7 00:14:14.954078 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Jul 7 00:14:14.954084 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 7 00:14:14.954091 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 7 00:14:14.954098 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 7 00:14:14.954106 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 7 00:14:14.954113 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Jul 7 00:14:14.954120 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jul 7 00:14:14.954126 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 7 
00:14:14.954133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 7 00:14:14.954140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jul 7 00:14:14.954147 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jul 7 00:14:14.954154 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jul 7 00:14:14.954161 kernel: Zone ranges: Jul 7 00:14:14.954169 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:14:14.954176 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 00:14:14.954183 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 7 00:14:14.954190 kernel: Device empty Jul 7 00:14:14.954196 kernel: Movable zone start for each node Jul 7 00:14:14.954203 kernel: Early memory node ranges Jul 7 00:14:14.954210 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 7 00:14:14.954217 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jul 7 00:14:14.954224 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jul 7 00:14:14.954231 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 7 00:14:14.954238 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 7 00:14:14.954245 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 7 00:14:14.954252 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:14:14.954259 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 7 00:14:14.954265 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 7 00:14:14.954272 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jul 7 00:14:14.954279 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 7 00:14:14.954286 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 00:14:14.954294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:14:14.954300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 
00:14:14.954307 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 7 00:14:14.954314 kernel: TSC deadline timer available Jul 7 00:14:14.954321 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:14:14.954328 kernel: CPU topo: Max. logical dies: 1 Jul 7 00:14:14.954334 kernel: CPU topo: Max. dies per package: 1 Jul 7 00:14:14.954341 kernel: CPU topo: Max. threads per core: 2 Jul 7 00:14:14.954348 kernel: CPU topo: Num. cores per package: 1 Jul 7 00:14:14.954356 kernel: CPU topo: Num. threads per package: 2 Jul 7 00:14:14.954362 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 00:14:14.954369 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 7 00:14:14.954376 kernel: Booting paravirtualized kernel on Hyper-V Jul 7 00:14:14.954382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:14:14.954389 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 00:14:14.954396 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 00:14:14.954403 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 00:14:14.954409 kernel: pcpu-alloc: [0] 0 1 Jul 7 00:14:14.954417 kernel: Hyper-V: PV spinlocks enabled Jul 7 00:14:14.954424 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 00:14:14.954432 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:14:14.954440 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 7 00:14:14.954447 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 7 00:14:14.954453 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 00:14:14.954460 kernel: Fallback order for Node 0: 0 Jul 7 00:14:14.954467 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jul 7 00:14:14.954475 kernel: Policy zone: Normal Jul 7 00:14:14.954482 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:14:14.954488 kernel: software IO TLB: area num 2. Jul 7 00:14:14.954495 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 00:14:14.954502 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:14:14.954509 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:14:14.954515 kernel: Dynamic Preempt: voluntary Jul 7 00:14:14.954522 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:14:14.954530 kernel: rcu: RCU event tracing is enabled. Jul 7 00:14:14.954544 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 00:14:14.954551 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:14:14.954559 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:14:14.954567 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:14:14.954575 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:14:14.954582 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 00:14:14.954589 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:14:14.954597 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:14:14.954604 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 7 00:14:14.954611 kernel: Using NULL legacy PIC Jul 7 00:14:14.954620 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 7 00:14:14.954627 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:14:14.954634 kernel: Console: colour dummy device 80x25 Jul 7 00:14:14.954642 kernel: printk: legacy console [tty1] enabled Jul 7 00:14:14.954649 kernel: printk: legacy console [ttyS0] enabled Jul 7 00:14:14.954656 kernel: printk: legacy bootconsole [earlyser0] disabled Jul 7 00:14:14.954664 kernel: ACPI: Core revision 20240827 Jul 7 00:14:14.954672 kernel: Failed to register legacy timer interrupt Jul 7 00:14:14.954679 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:14:14.954687 kernel: x2apic enabled Jul 7 00:14:14.954694 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:14:14.954701 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 7 00:14:14.954709 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 7 00:14:14.954716 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jul 7 00:14:14.954724 kernel: Hyper-V: Using IPI hypercalls Jul 7 00:14:14.954731 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 7 00:14:14.954739 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 7 00:14:14.954747 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 7 00:14:14.954754 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 7 00:14:14.954761 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 7 00:14:14.954769 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 7 00:14:14.954776 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Jul 7 00:14:14.954784 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4599.99 BogoMIPS (lpj=2299999) Jul 7 00:14:14.954791 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 00:14:14.954800 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 00:14:14.954808 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 00:14:14.954815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:14:14.954822 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 00:14:14.954829 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:14:14.954836 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 7 00:14:14.954844 kernel: RETBleed: Vulnerable Jul 7 00:14:14.954851 kernel: Speculative Store Bypass: Vulnerable Jul 7 00:14:14.954858 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:14:14.954874 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:14:14.954881 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:14:14.954890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:14:14.954897 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 7 00:14:14.954905 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 7 00:14:14.954912 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 7 00:14:14.954919 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jul 7 00:14:14.954926 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jul 7 00:14:14.954934 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jul 7 00:14:14.954941 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:14:14.954948 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 7 00:14:14.954955 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 7 00:14:14.954962 kernel: 
x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 7 00:14:14.954971 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jul 7 00:14:14.954978 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jul 7 00:14:14.954985 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jul 7 00:14:14.954992 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jul 7 00:14:14.954999 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:14:14.955006 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:14:14.955014 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:14:14.955021 kernel: landlock: Up and running. Jul 7 00:14:14.955028 kernel: SELinux: Initializing. Jul 7 00:14:14.955035 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:14:14.955043 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:14:14.955050 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jul 7 00:14:14.955059 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jul 7 00:14:14.955066 kernel: signal: max sigframe size: 11952 Jul 7 00:14:14.955074 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:14:14.955082 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:14:14.955089 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 00:14:14.955097 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 00:14:14.955104 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:14:14.955112 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:14:14.955119 kernel: .... 
node #0, CPUs: #1 Jul 7 00:14:14.955129 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 00:14:14.955136 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 7 00:14:14.955144 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 299988K reserved, 0K cma-reserved) Jul 7 00:14:14.955151 kernel: devtmpfs: initialized Jul 7 00:14:14.955159 kernel: x86/mm: Memory block size: 128MB Jul 7 00:14:14.955166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 7 00:14:14.955174 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:14:14.955181 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 00:14:14.955188 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:14:14.955198 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:14:14.955205 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:14:14.955212 kernel: audit: type=2000 audit(1751847252.028:1): state=initialized audit_enabled=0 res=1 Jul 7 00:14:14.955219 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:14:14.955227 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:14:14.955234 kernel: cpuidle: using governor menu Jul 7 00:14:14.955241 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:14:14.955249 kernel: dca service started, version 1.12.1 Jul 7 00:14:14.955256 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jul 7 00:14:14.955265 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jul 7 00:14:14.955273 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 00:14:14.955280 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:14:14.955288 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:14:14.955295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:14:14.955303 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:14:14.955310 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:14:14.955317 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:14:14.955326 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:14:14.955334 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 00:14:14.955341 kernel: ACPI: Interpreter enabled Jul 7 00:14:14.955348 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:14:14.955356 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:14:14.955363 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:14:14.955371 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 7 00:14:14.955378 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 7 00:14:14.955386 kernel: iommu: Default domain type: Translated Jul 7 00:14:14.955393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:14:14.955403 kernel: efivars: Registered efivars operations Jul 7 00:14:14.955410 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:14:14.955417 kernel: PCI: System does not support PCI Jul 7 00:14:14.955424 kernel: vgaarb: loaded Jul 7 00:14:14.955432 kernel: clocksource: Switched to clocksource tsc-early Jul 7 00:14:14.955439 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:14:14.955446 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:14:14.955454 kernel: pnp: PnP ACPI init Jul 7 00:14:14.955461 kernel: pnp: PnP ACPI: found 3 devices Jul 7 00:14:14.955470 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:14:14.955477 kernel: NET: 
Registered PF_INET protocol family Jul 7 00:14:14.955485 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 00:14:14.955492 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 7 00:14:14.955499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:14:14.955507 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:14:14.955514 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:14:14.955521 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 7 00:14:14.955530 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:14:14.955538 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:14:14.955545 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:14:14.955552 kernel: NET: Registered PF_XDP protocol family Jul 7 00:14:14.955560 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:14:14.955567 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:14:14.955574 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB) Jul 7 00:14:14.955582 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jul 7 00:14:14.955589 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jul 7 00:14:14.955598 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Jul 7 00:14:14.955605 kernel: clocksource: Switched to clocksource tsc Jul 7 00:14:14.955613 kernel: Initialise system trusted keyrings Jul 7 00:14:14.955620 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 7 00:14:14.955627 kernel: Key type asymmetric registered Jul 7 00:14:14.955634 kernel: Asymmetric key parser 'x509' registered Jul 7 00:14:14.955642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 
250) Jul 7 00:14:14.955649 kernel: io scheduler mq-deadline registered Jul 7 00:14:14.955657 kernel: io scheduler kyber registered Jul 7 00:14:14.955665 kernel: io scheduler bfq registered Jul 7 00:14:14.955672 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:14:14.955680 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:14:14.955687 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:14:14.955694 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 00:14:14.955702 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:14:14.955709 kernel: i8042: PNP: No PS/2 controller found. Jul 7 00:14:14.955827 kernel: rtc_cmos 00:02: registered as rtc0 Jul 7 00:14:14.956734 kernel: rtc_cmos 00:02: setting system clock to 2025-07-07T00:14:14 UTC (1751847254) Jul 7 00:14:14.956812 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 7 00:14:14.956822 kernel: intel_pstate: Intel P-state driver initializing Jul 7 00:14:14.956830 kernel: efifb: probing for efifb Jul 7 00:14:14.956838 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 7 00:14:14.956846 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 7 00:14:14.956855 kernel: efifb: scrolling: redraw Jul 7 00:14:14.956918 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 00:14:14.956927 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 00:14:14.956938 kernel: fb0: EFI VGA frame buffer device Jul 7 00:14:14.956945 kernel: pstore: Using crash dump compression: deflate Jul 7 00:14:14.956953 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 00:14:14.956960 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:14:14.956968 kernel: Segment Routing with IPv6 Jul 7 00:14:14.956976 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:14:14.956983 kernel: NET: Registered PF_PACKET protocol family Jul 7 
00:14:14.956991 kernel: Key type dns_resolver registered Jul 7 00:14:14.956999 kernel: IPI shorthand broadcast: enabled Jul 7 00:14:14.957009 kernel: sched_clock: Marking stable (2703003466, 83842241)->(3071898555, -285052848) Jul 7 00:14:14.957016 kernel: registered taskstats version 1 Jul 7 00:14:14.957024 kernel: Loading compiled-in X.509 certificates Jul 7 00:14:14.957031 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:14:14.957038 kernel: Demotion targets for Node 0: null Jul 7 00:14:14.957046 kernel: Key type .fscrypt registered Jul 7 00:14:14.957053 kernel: Key type fscrypt-provisioning registered Jul 7 00:14:14.957060 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 00:14:14.957067 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:14:14.957076 kernel: ima: No architecture policies found Jul 7 00:14:14.957084 kernel: clk: Disabling unused clocks Jul 7 00:14:14.957091 kernel: Warning: unable to open an initial console. Jul 7 00:14:14.957099 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:14:14.957106 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:14:14.957114 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:14:14.957121 kernel: Run /init as init process Jul 7 00:14:14.957129 kernel: with arguments: Jul 7 00:14:14.957136 kernel: /init Jul 7 00:14:14.957145 kernel: with environment: Jul 7 00:14:14.957153 kernel: HOME=/ Jul 7 00:14:14.957160 kernel: TERM=linux Jul 7 00:14:14.957167 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:14:14.957176 systemd[1]: Successfully made /usr/ read-only. 
Jul 7 00:14:14.957187 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:14:14.957196 systemd[1]: Detected virtualization microsoft. Jul 7 00:14:14.957205 systemd[1]: Detected architecture x86-64. Jul 7 00:14:14.957213 systemd[1]: Running in initrd. Jul 7 00:14:14.957220 systemd[1]: No hostname configured, using default hostname. Jul 7 00:14:14.957228 systemd[1]: Hostname set to . Jul 7 00:14:14.957236 systemd[1]: Initializing machine ID from random generator. Jul 7 00:14:14.957244 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:14:14.957252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:14:14.957260 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:14:14.957270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:14:14.957278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:14:14.957286 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:14:14.957295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:14:14.957304 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:14:14.957312 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:14:14.957320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 7 00:14:14.957330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:14:14.957338 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:14:14.957346 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:14:14.957354 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:14:14.957362 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:14:14.957370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:14:14.957378 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:14:14.957386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:14:14.957394 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:14:14.957403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:14:14.957411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:14:14.957419 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:14:14.957427 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:14:14.957435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:14:14.957443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:14:14.957451 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:14:14.957459 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:14:14.957469 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:14:14.957477 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:14:14.957486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 7 00:14:14.957503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:14:14.957512 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:14:14.957523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:14:14.957531 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:14:14.957554 systemd-journald[205]: Collecting audit messages is disabled. Jul 7 00:14:14.957585 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:14:14.957596 systemd-journald[205]: Journal started Jul 7 00:14:14.957614 systemd-journald[205]: Runtime Journal (/run/log/journal/f47ab8879a1f4407bbc2c7dae3954d54) is 8M, max 158.9M, 150.9M free. Jul 7 00:14:14.960411 systemd-modules-load[206]: Inserted module 'overlay' Jul 7 00:14:14.963880 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:14:14.965668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:14:14.974689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:14:14.980286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:14:14.988957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:14:14.996943 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:14:14.994945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:14:14.995575 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jul 7 00:14:15.006093 kernel: Bridge firewalling registered
Jul 7 00:14:15.002485 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 7 00:14:15.002965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:14:15.007566 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:14:15.016944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:14:15.022972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:14:15.029618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:14:15.035029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:14:15.038407 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:14:15.046964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:14:15.058550 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:14:15.084664 systemd-resolved[246]: Positive Trust Anchors:
Jul 7 00:14:15.085882 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:14:15.088562 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:14:15.098177 systemd-resolved[246]: Defaulting to hostname 'linux'.
Jul 7 00:14:15.098915 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:14:15.108977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:14:15.126875 kernel: SCSI subsystem initialized
Jul 7 00:14:15.133875 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:14:15.141874 kernel: iscsi: registered transport (tcp)
Jul 7 00:14:15.157249 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:14:15.157284 kernel: QLogic iSCSI HBA Driver
Jul 7 00:14:15.168410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:14:15.182735 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:14:15.186206 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:14:15.213911 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:14:15.216963 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:14:15.254877 kernel: raid6: avx512x4 gen() 47093 MB/s
Jul 7 00:14:15.271874 kernel: raid6: avx512x2 gen() 45855 MB/s
Jul 7 00:14:15.288870 kernel: raid6: avx512x1 gen() 30036 MB/s
Jul 7 00:14:15.305870 kernel: raid6: avx2x4 gen() 41794 MB/s
Jul 7 00:14:15.323872 kernel: raid6: avx2x2 gen() 43937 MB/s
Jul 7 00:14:15.341956 kernel: raid6: avx2x1 gen() 30093 MB/s
Jul 7 00:14:15.341969 kernel: raid6: using algorithm avx512x4 gen() 47093 MB/s
Jul 7 00:14:15.360228 kernel: raid6: .... xor() 7829 MB/s, rmw enabled
Jul 7 00:14:15.360307 kernel: raid6: using avx512x2 recovery algorithm
Jul 7 00:14:15.376877 kernel: xor: automatically using best checksumming function avx
Jul 7 00:14:15.477876 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:14:15.481937 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:14:15.485123 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:14:15.503587 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jul 7 00:14:15.507031 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:14:15.512728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:14:15.525846 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 7 00:14:15.541044 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:14:15.542968 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:14:15.578947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:14:15.585830 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:14:15.619883 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:14:15.628881 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:14:15.643475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:15.643792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:15.650490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:15.656616 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 00:14:15.656564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:15.671424 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 00:14:15.671452 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 00:14:15.678329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:15.687783 kernel: PTP clock support registered
Jul 7 00:14:15.678399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:15.680992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:15.698035 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 00:14:15.698066 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 00:14:15.701915 kernel: hv_vmbus: registering driver hv_pci
Jul 7 00:14:15.706876 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 7 00:14:15.723876 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 00:14:15.729476 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 00:14:15.729507 kernel: hv_vmbus: registering driver hv_utils
Jul 7 00:14:15.731181 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 00:14:15.731983 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jul 7 00:14:15.738186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:16.213376 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 00:14:16.213921 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd18d06 (unnamed net_device) (uninitialized): VF slot 1 added
Jul 7 00:14:16.214426 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 00:14:16.214500 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 00:14:16.214513 kernel: scsi host0: storvsc_host_t
Jul 7 00:14:16.214668 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jul 7 00:14:16.214776 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jul 7 00:14:16.214898 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 00:14:16.214960 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 7 00:14:16.214973 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jul 7 00:14:16.203006 systemd-resolved[246]: Clock change detected. Flushing caches.
Jul 7 00:14:16.218590 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jul 7 00:14:16.226545 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 00:14:16.230885 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 7 00:14:16.233531 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 00:14:16.236797 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
Jul 7 00:14:16.242117 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 00:14:16.242262 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:14:16.242272 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 00:14:16.244655 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jul 7 00:14:16.244789 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 00:14:16.259694 kernel: nvme nvme0: pci function c05b:00:00.0
Jul 7 00:14:16.259879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#78 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:14:16.261261 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jul 7 00:14:16.276836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:14:16.523623 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 00:14:16.528548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:16.784593 kernel: nvme nvme0: using unchecked data buffer
Jul 7 00:14:17.008875 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jul 7 00:14:17.019160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 7 00:14:17.032489 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jul 7 00:14:17.049427 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jul 7 00:14:17.050188 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Jul 7 00:14:17.055788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:14:17.063676 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:14:17.064233 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:14:17.071165 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:14:17.073909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:14:17.082639 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:14:17.093246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:14:17.098643 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:17.108564 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:17.214453 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jul 7 00:14:17.214602 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jul 7 00:14:17.216952 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jul 7 00:14:17.218455 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 00:14:17.223532 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jul 7 00:14:17.225532 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jul 7 00:14:17.229596 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jul 7 00:14:17.229618 kernel: pci 7870:00:00.0: enabling Extended Tags
Jul 7 00:14:17.243642 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 00:14:17.243799 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jul 7 00:14:17.247607 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jul 7 00:14:17.251439 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jul 7 00:14:17.259549 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jul 7 00:14:17.261876 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd18d06 eth0: VF registering: eth1
Jul 7 00:14:17.262028 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jul 7 00:14:17.265547 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jul 7 00:14:18.106093 disk-uuid[681]: The operation has completed successfully.
Jul 7 00:14:18.108026 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:14:18.146384 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:14:18.146463 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:14:18.180851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:14:18.202228 sh[722]: Success
Jul 7 00:14:18.230896 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:14:18.230933 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:14:18.230946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:14:18.240546 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 00:14:18.446951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:14:18.451112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:14:18.467180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:14:18.477540 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:14:18.477575 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (735)
Jul 7 00:14:18.479975 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:14:18.481185 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:18.482666 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:14:18.743178 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:14:18.744746 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:14:18.746877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:14:18.748624 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:14:18.753043 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:14:18.781608 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (768)
Jul 7 00:14:18.787584 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:18.787617 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:18.787631 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:18.810440 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:14:18.812293 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:18.814890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:14:18.829678 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:14:18.833987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:14:18.859769 systemd-networkd[904]: lo: Link UP
Jul 7 00:14:18.859777 systemd-networkd[904]: lo: Gained carrier
Jul 7 00:14:18.861149 systemd-networkd[904]: Enumeration completed
Jul 7 00:14:18.861209 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:14:18.869319 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 7 00:14:18.869907 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 00:14:18.870045 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd18d06 eth0: Data path switched to VF: enP30832s1
Jul 7 00:14:18.862242 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:14:18.862245 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:14:18.864776 systemd[1]: Reached target network.target - Network.
Jul 7 00:14:18.871216 systemd-networkd[904]: enP30832s1: Link UP
Jul 7 00:14:18.871359 systemd-networkd[904]: eth0: Link UP
Jul 7 00:14:18.871754 systemd-networkd[904]: eth0: Gained carrier
Jul 7 00:14:18.871764 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:14:18.878779 systemd-networkd[904]: enP30832s1: Gained carrier
Jul 7 00:14:18.888561 systemd-networkd[904]: eth0: DHCPv4 address 10.200.4.38/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jul 7 00:14:20.046965 ignition[889]: Ignition 2.21.0
Jul 7 00:14:20.046975 ignition[889]: Stage: fetch-offline
Jul 7 00:14:20.048326 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:14:20.047055 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:20.050765 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:14:20.047061 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:20.047148 ignition[889]: parsed url from cmdline: ""
Jul 7 00:14:20.047150 ignition[889]: no config URL provided
Jul 7 00:14:20.047154 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:14:20.047159 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:14:20.047164 ignition[889]: failed to fetch config: resource requires networking
Jul 7 00:14:20.047362 ignition[889]: Ignition finished successfully
Jul 7 00:14:20.070297 ignition[913]: Ignition 2.21.0
Jul 7 00:14:20.070301 ignition[913]: Stage: fetch
Jul 7 00:14:20.070456 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:20.070461 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:20.070509 ignition[913]: parsed url from cmdline: ""
Jul 7 00:14:20.070511 ignition[913]: no config URL provided
Jul 7 00:14:20.070514 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:14:20.070517 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:14:20.070618 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 00:14:20.121280 ignition[913]: GET result: OK
Jul 7 00:14:20.121439 ignition[913]: config has been read from IMDS userdata
Jul 7 00:14:20.121612 ignition[913]: parsing config with SHA512: 26c41da7d9b5f7c4025b424d38e27080d8c43db20043561eed14e2dbb803275bf06fbe4e473fc016a22714f11525034845f5c5ebb0f33abc7c1d029b0b993acd
Jul 7 00:14:20.127951 unknown[913]: fetched base config from "system"
Jul 7 00:14:20.127959 unknown[913]: fetched base config from "system"
Jul 7 00:14:20.128257 ignition[913]: fetch: fetch complete
Jul 7 00:14:20.127964 unknown[913]: fetched user config from "azure"
Jul 7 00:14:20.128261 ignition[913]: fetch: fetch passed
Jul 7 00:14:20.129998 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:14:20.128290 ignition[913]: Ignition finished successfully
Jul 7 00:14:20.138425 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:14:20.165317 ignition[920]: Ignition 2.21.0
Jul 7 00:14:20.165326 ignition[920]: Stage: kargs
Jul 7 00:14:20.165479 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:20.166945 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:14:20.165485 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:20.170629 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:14:20.166104 ignition[920]: kargs: kargs passed
Jul 7 00:14:20.166131 ignition[920]: Ignition finished successfully
Jul 7 00:14:20.194897 ignition[926]: Ignition 2.21.0
Jul 7 00:14:20.194906 ignition[926]: Stage: disks
Jul 7 00:14:20.196382 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:14:20.195054 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:20.197392 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:14:20.195060 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:20.197457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:14:20.195660 ignition[926]: disks: disks passed
Jul 7 00:14:20.197489 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:14:20.195687 ignition[926]: Ignition finished successfully
Jul 7 00:14:20.197757 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:14:20.197774 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:14:20.198361 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:14:20.270812 systemd-fsck[934]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 7 00:14:20.275088 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:14:20.280509 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:14:20.343633 systemd-networkd[904]: enP30832s1: Gained IPv6LL
Jul 7 00:14:20.539538 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none.
Jul 7 00:14:20.539643 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:14:20.540636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:14:20.565284 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:14:20.574597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:14:20.578017 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:14:20.580170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:14:20.580194 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:14:20.589356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:14:20.598635 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (943)
Jul 7 00:14:20.598652 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:20.598663 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:20.598670 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:20.599591 systemd-networkd[904]: eth0: Gained IPv6LL
Jul 7 00:14:20.602639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:14:20.606144 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:14:21.037064 coreos-metadata[945]: Jul 07 00:14:21.037 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 00:14:21.040604 coreos-metadata[945]: Jul 07 00:14:21.039 INFO Fetch successful
Jul 7 00:14:21.040604 coreos-metadata[945]: Jul 07 00:14:21.039 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 00:14:21.046204 coreos-metadata[945]: Jul 07 00:14:21.046 INFO Fetch successful
Jul 7 00:14:21.060478 coreos-metadata[945]: Jul 07 00:14:21.060 INFO wrote hostname ci-4344.1.1-a-19494a3847 to /sysroot/etc/hostname
Jul 7 00:14:21.063362 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:14:21.309835 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:14:21.343015 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:14:21.377491 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:14:21.381173 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:14:22.250201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:14:22.254556 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:14:22.260625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:14:22.268909 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:14:22.271163 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:22.287434 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:14:22.291534 ignition[1066]: INFO : Ignition 2.21.0
Jul 7 00:14:22.293098 ignition[1066]: INFO : Stage: mount
Jul 7 00:14:22.293098 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:22.293098 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:22.293098 ignition[1066]: INFO : mount: mount passed
Jul 7 00:14:22.293098 ignition[1066]: INFO : Ignition finished successfully
Jul 7 00:14:22.293167 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:14:22.295069 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:14:22.319811 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:14:22.337622 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1077)
Jul 7 00:14:22.337731 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:14:22.338620 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:14:22.339890 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:14:22.343476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:14:22.365150 ignition[1093]: INFO : Ignition 2.21.0
Jul 7 00:14:22.365150 ignition[1093]: INFO : Stage: files
Jul 7 00:14:22.368208 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:22.368208 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:22.368208 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:14:22.381625 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:14:22.381625 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:14:22.432865 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:14:22.436570 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:14:22.436570 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:14:22.433132 unknown[1093]: wrote ssh authorized keys file for user: core
Jul 7 00:14:22.450630 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:14:22.450630 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 00:14:22.712287 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:14:22.808819 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:14:22.808819 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:14:22.815903 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 00:14:23.298432 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:14:23.340089 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:14:23.342339 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:14:23.345606 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:23.367551 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 00:14:24.136535 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:14:24.282944 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:14:24.282944 ignition[1093]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 00:14:24.320218 ignition[1093]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:14:24.326645 ignition[1093]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:14:24.326645 ignition[1093]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 00:14:24.326645 ignition[1093]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:14:24.332194 ignition[1093]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:14:24.332194 ignition[1093]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:14:24.332194 ignition[1093]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:14:24.332194 ignition[1093]: INFO : files: files passed
Jul 7 00:14:24.332194 ignition[1093]: INFO : Ignition finished successfully
Jul 7 00:14:24.330256 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:14:24.343030 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:14:24.352032 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:14:24.357040 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:14:24.358799 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:14:24.398404 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:14:24.398404 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:14:24.405991 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:14:24.401665 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:14:24.404119 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:14:24.409173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:14:24.442650 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:14:24.442734 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:14:24.446837 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:14:24.447120 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:14:24.447221 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:14:24.448618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:14:24.464501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:14:24.466626 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:14:24.480834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:14:24.481451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:14:24.492509 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:14:24.497656 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:14:24.497777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:14:24.501828 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:14:24.502358 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:14:24.502635 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:14:24.502890 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:14:24.503201 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:14:24.503466 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:14:24.503745 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:14:24.504007 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:14:24.504539 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:14:24.505034 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:14:24.505547 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:14:24.505778 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:14:24.505876 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:14:24.518623 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:14:24.521158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:14:24.526068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:14:24.526447 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:14:24.528476 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:14:24.528584 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:14:24.547675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:14:24.547843 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:14:24.548127 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:14:24.548225 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:14:24.552931 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 00:14:24.553043 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:14:24.560904 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:14:24.564171 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:14:24.564317 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:14:24.570631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:14:24.575589 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:14:24.575710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:14:24.577913 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:14:24.591595 ignition[1148]: INFO : Ignition 2.21.0
Jul 7 00:14:24.591595 ignition[1148]: INFO : Stage: umount
Jul 7 00:14:24.591595 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:14:24.591595 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:14:24.578027 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:14:24.595585 ignition[1148]: INFO : umount: umount passed
Jul 7 00:14:24.595585 ignition[1148]: INFO : Ignition finished successfully
Jul 7 00:14:24.585855 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:14:24.586032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:14:24.593347 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:14:24.593435 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:14:24.601786 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:14:24.601829 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:14:24.603493 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:14:24.603533 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:14:24.606204 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 00:14:24.606248 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 00:14:24.613973 systemd[1]: Stopped target network.target - Network.
Jul 7 00:14:24.616690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:14:24.617444 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:14:24.626435 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:14:24.628466 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:14:24.629663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:14:24.632904 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:14:24.635085 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:14:24.635335 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:14:24.635369 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:14:24.639981 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:14:24.640016 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:14:24.644281 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:14:24.644334 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:14:24.648587 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:14:24.648628 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:14:24.650905 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:14:24.654614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:14:24.660460 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:14:24.662145 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:14:24.662229 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:14:24.666071 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 00:14:24.666225 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:14:24.666306 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:14:24.671633 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 00:14:24.671953 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 00:14:24.681595 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:14:24.681634 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:14:24.685608 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:14:24.685886 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:14:24.685931 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:14:24.686190 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:14:24.686219 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:14:24.692392 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:14:24.692436 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:14:24.693579 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:14:24.693618 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:14:24.703810 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:14:24.713257 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 00:14:24.713314 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:14:24.723545 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd18d06 eth0: Data path switched from VF: enP30832s1
Jul 7 00:14:24.723691 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 00:14:24.723234 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:14:24.723367 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:14:24.726249 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:14:24.726283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:14:24.726349 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:14:24.726367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:14:24.726393 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:14:24.726421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:14:24.732575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:14:24.732622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:14:24.733354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:14:24.733382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:14:24.735627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:14:24.740563 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 00:14:24.740611 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:14:24.743785 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:14:24.743823 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:14:24.748613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:24.748657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:24.759115 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 00:14:24.759158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 00:14:24.759186 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:14:24.759405 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:14:24.759469 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:14:24.759923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:14:24.759984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:14:24.825267 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:14:24.825344 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:14:24.826393 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:14:24.826561 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:14:24.826598 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:14:24.827277 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:14:24.841827 systemd[1]: Switching root.
Jul 7 00:14:24.886029 systemd-journald[205]: Journal stopped
Jul 7 00:14:29.077049 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:14:29.077075 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:14:29.077085 kernel: SELinux: policy capability open_perms=1
Jul 7 00:14:29.077093 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:14:29.077100 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:14:29.077107 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:14:29.077117 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:14:29.077124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:14:29.077131 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:14:29.077139 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 00:14:29.077146 kernel: audit: type=1403 audit(1751847266.346:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:14:29.077154 systemd[1]: Successfully loaded SELinux policy in 129.796ms.
Jul 7 00:14:29.077164 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.825ms.
Jul 7 00:14:29.077175 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:14:29.077184 systemd[1]: Detected virtualization microsoft.
Jul 7 00:14:29.077193 systemd[1]: Detected architecture x86-64.
Jul 7 00:14:29.077201 systemd[1]: Detected first boot.
Jul 7 00:14:29.077209 systemd[1]: Hostname set to .
Jul 7 00:14:29.077218 systemd[1]: Initializing machine ID from random generator.
Jul 7 00:14:29.077226 zram_generator::config[1192]: No configuration found.
Jul 7 00:14:29.077235 kernel: Guest personality initialized and is inactive
Jul 7 00:14:29.077243 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 7 00:14:29.077250 kernel: Initialized host personality
Jul 7 00:14:29.077258 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 00:14:29.077266 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:14:29.077276 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 00:14:29.077284 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 00:14:29.077293 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 00:14:29.077301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:14:29.077310 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:14:29.077319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:14:29.077327 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:14:29.077336 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:14:29.077345 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:14:29.077353 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:14:29.077361 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:14:29.077369 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:14:29.077378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:14:29.077387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:14:29.077395 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:14:29.077406 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:14:29.077415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:14:29.077425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:14:29.077433 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:14:29.077441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:14:29.077450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:14:29.077459 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 00:14:29.077467 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 00:14:29.077478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:14:29.077486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:14:29.077495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:14:29.077503 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:14:29.077511 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:14:29.077528 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:14:29.077539 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:14:29.077547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:14:29.077558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 00:14:29.077566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:14:29.077575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:14:29.077583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:14:29.077592 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:14:29.077602 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:14:29.077610 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:14:29.077619 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:14:29.077628 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:14:29.077636 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:14:29.077644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:14:29.077652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:14:29.077660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:14:29.077671 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:14:29.077680 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:14:29.077707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:14:29.077716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:14:29.077725 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:14:29.077733 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:14:29.077741 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:14:29.077750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:14:29.077758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:14:29.077768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:14:29.077777 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:14:29.077786 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 00:14:29.077794 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 00:14:29.077803 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 00:14:29.077811 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 00:14:29.077820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:14:29.077828 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:14:29.077838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:14:29.077847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:14:29.077855 kernel: fuse: init (API version 7.41)
Jul 7 00:14:29.077864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:14:29.077873 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 00:14:29.077881 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:14:29.077889 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 00:14:29.077896 systemd[1]: Stopped verity-setup.service.
Jul 7 00:14:29.077906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:14:29.077914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:14:29.077921 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:14:29.077929 kernel: loop: module loaded
Jul 7 00:14:29.077936 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:14:29.077944 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:14:29.077953 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:14:29.077976 systemd-journald[1275]: Collecting audit messages is disabled.
Jul 7 00:14:29.077996 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:14:29.078006 systemd-journald[1275]: Journal started
Jul 7 00:14:29.078027 systemd-journald[1275]: Runtime Journal (/run/log/journal/2b1532d478e04eeaa18e3579575e65e6) is 8M, max 158.9M, 150.9M free.
Jul 7 00:14:28.666858 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:14:29.080944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:14:28.674883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 00:14:28.675172 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 00:14:29.087001 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:14:29.089011 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:14:29.089211 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:14:29.092051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:14:29.092271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:14:29.094909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:14:29.095288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:14:29.099870 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:14:29.100173 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:14:29.102589 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:14:29.102791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:14:29.105122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:14:29.107794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:14:29.110616 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:14:29.113478 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:14:29.116280 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 00:14:29.125119 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:14:29.130597 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:14:29.133515 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:14:29.137120 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:14:29.137149 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:14:29.139908 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 00:14:29.147034 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:14:29.149205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:14:29.165053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:14:29.170755 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:14:29.172633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:14:29.179432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:14:29.183786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:14:29.184553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:14:29.186546 kernel: ACPI: bus type drm_connector registered
Jul 7 00:14:29.187736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:14:29.193170 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:14:29.197919 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:14:29.198644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:14:29.201418 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:14:29.203820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:14:29.206837 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:14:29.212951 systemd-journald[1275]: Time spent on flushing to /var/log/journal/2b1532d478e04eeaa18e3579575e65e6 is 13.188ms for 990 entries.
Jul 7 00:14:29.212951 systemd-journald[1275]: System Journal (/var/log/journal/2b1532d478e04eeaa18e3579575e65e6) is 8M, max 2.6G, 2.6G free.
Jul 7 00:14:29.242661 systemd-journald[1275]: Received client request to flush runtime journal.
Jul 7 00:14:29.228759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:14:29.232745 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:14:29.237628 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 00:14:29.244025 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:14:29.267541 kernel: loop0: detected capacity change from 0 to 146240
Jul 7 00:14:29.278943 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:14:29.283629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:14:29.290422 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 00:14:29.295747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:14:29.345212 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Jul 7 00:14:29.345227 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Jul 7 00:14:29.347688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:14:29.623540 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:14:29.638638 kernel: loop1: detected capacity change from 0 to 224512
Jul 7 00:14:29.676308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:14:29.682536 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 00:14:30.162701 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:14:30.165254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:14:30.192161 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Jul 7 00:14:30.203546 kernel: loop3: detected capacity change from 0 to 28496
Jul 7 00:14:30.296582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:14:30.303653 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:14:30.350180 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 00:14:30.376670 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:14:30.414540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:14:30.431543 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 00:14:30.435548 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 00:14:30.441548 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 00:14:30.448004 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 00:14:30.448061 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 00:14:30.451548 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 00:14:30.453112 kernel: Console: switching to colour dummy device 80x25
Jul 7 00:14:30.459210 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 00:14:30.512602 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 00:14:30.547165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:30.554052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:30.554198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:30.559766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:30.653996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:14:30.654561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:14:30.659399 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:14:30.660357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:14:30.720747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 7 00:14:30.725274 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 00:14:30.749888 kernel: loop4: detected capacity change from 0 to 146240 Jul 7 00:14:30.757561 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 7 00:14:30.771537 kernel: loop5: detected capacity change from 0 to 224512 Jul 7 00:14:30.780375 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:14:30.787541 kernel: loop6: detected capacity change from 0 to 113872 Jul 7 00:14:30.796912 systemd-networkd[1369]: lo: Link UP Jul 7 00:14:30.796918 systemd-networkd[1369]: lo: Gained carrier Jul 7 00:14:30.799091 systemd-networkd[1369]: Enumeration completed Jul 7 00:14:30.799379 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:14:30.799383 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:14:30.799665 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:14:30.801537 kernel: loop7: detected capacity change from 0 to 28496 Jul 7 00:14:30.802951 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 7 00:14:30.803634 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:14:30.805627 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 7 00:14:30.810540 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 7 00:14:30.814666 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd18d06 eth0: Data path switched to VF: enP30832s1 Jul 7 00:14:30.815891 systemd-networkd[1369]: enP30832s1: Link UP Jul 7 00:14:30.816135 systemd-networkd[1369]: eth0: Link UP Jul 7 00:14:30.816140 systemd-networkd[1369]: eth0: Gained carrier Jul 7 00:14:30.816154 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:14:30.819961 (sd-merge)[1450]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 7 00:14:30.820554 (sd-merge)[1450]: Merged extensions into '/usr'. Jul 7 00:14:30.828721 systemd-networkd[1369]: enP30832s1: Gained carrier Jul 7 00:14:30.830325 systemd[1]: Reload requested from client PID 1332 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:14:30.830395 systemd[1]: Reloading... Jul 7 00:14:30.842558 systemd-networkd[1369]: eth0: DHCPv4 address 10.200.4.38/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 00:14:30.884706 zram_generator::config[1499]: No configuration found. Jul 7 00:14:30.951600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:31.032125 systemd[1]: Reloading finished in 201 ms. Jul 7 00:14:31.058957 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:14:31.060703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:14:31.063756 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:14:31.075275 systemd[1]: Starting ensure-sysext.service... 
Jul 7 00:14:31.078637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:14:31.098980 systemd[1]: Reload requested from client PID 1548 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:14:31.098997 systemd[1]: Reloading... Jul 7 00:14:31.104433 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:14:31.104462 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:14:31.104949 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:14:31.105213 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:14:31.105921 systemd-tmpfiles[1549]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:14:31.106151 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Jul 7 00:14:31.106207 systemd-tmpfiles[1549]: ACLs are not supported, ignoring. Jul 7 00:14:31.108917 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:14:31.109004 systemd-tmpfiles[1549]: Skipping /boot Jul 7 00:14:31.114204 systemd-tmpfiles[1549]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:14:31.114272 systemd-tmpfiles[1549]: Skipping /boot Jul 7 00:14:31.164563 zram_generator::config[1580]: No configuration found. Jul 7 00:14:31.238106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:31.317931 systemd[1]: Reloading finished in 218 ms. Jul 7 00:14:31.333143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 7 00:14:31.341318 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:14:31.344944 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:14:31.352723 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:14:31.357737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:14:31.360662 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:14:31.365375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.365535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:14:31.366831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:14:31.374187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:14:31.380156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:14:31.382422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:14:31.382543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:14:31.382626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.386187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:14:31.386340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 7 00:14:31.395335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.395657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:14:31.400754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:14:31.402862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:14:31.403020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:14:31.403137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.408882 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:14:31.411723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:14:31.414877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:14:31.418272 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:14:31.418415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:14:31.421419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:14:31.421587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:14:31.429599 systemd[1]: Finished ensure-sysext.service. Jul 7 00:14:31.434346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.434915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 7 00:14:31.435684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:14:31.439762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:14:31.439799 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:14:31.439831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:14:31.439867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:14:31.439894 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:14:31.450617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:14:31.450908 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:14:31.455710 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:14:31.455811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:14:31.477437 systemd-resolved[1643]: Positive Trust Anchors: Jul 7 00:14:31.477445 systemd-resolved[1643]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:14:31.477475 systemd-resolved[1643]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:14:31.481053 systemd-resolved[1643]: Using system hostname 'ci-4344.1.1-a-19494a3847'. Jul 7 00:14:31.482047 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:14:31.484640 systemd[1]: Reached target network.target - Network. Jul 7 00:14:31.485610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:14:31.507408 augenrules[1678]: No rules Jul 7 00:14:31.508133 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:14:31.508277 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:14:31.589578 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:14:31.593747 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:14:32.503664 systemd-networkd[1369]: enP30832s1: Gained IPv6LL Jul 7 00:14:32.823626 systemd-networkd[1369]: eth0: Gained IPv6LL Jul 7 00:14:32.825763 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:14:32.827467 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 7 00:14:34.494034 ldconfig[1327]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:14:34.516211 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:14:34.519606 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:14:34.535632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:14:34.538728 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:14:34.541645 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:14:34.544575 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:14:34.547562 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:14:34.548957 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:14:34.550275 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:14:34.553571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:14:34.555084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:14:34.555111 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:14:34.557560 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:14:34.561805 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:14:34.565400 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:14:34.570349 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:14:34.573686 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jul 7 00:14:34.576583 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:14:34.580805 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:14:34.583839 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:14:34.587075 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:14:34.591149 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:14:34.593568 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:14:34.594603 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:14:34.594622 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:14:34.596333 systemd[1]: Starting chronyd.service - NTP client/server... Jul 7 00:14:34.599862 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:14:34.605195 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:14:34.609542 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:14:34.613669 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:14:34.619735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:14:34.622600 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:14:34.624813 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:14:34.626621 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:14:34.628881 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). 
Jul 7 00:14:34.630688 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 7 00:14:34.632677 jq[1696]: false Jul 7 00:14:34.632890 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 7 00:14:34.637631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:34.641803 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:14:34.645895 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:14:34.649854 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:14:34.653680 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:14:34.658744 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:14:34.666608 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:14:34.670261 KVP[1702]: KVP starting; pid is:1702 Jul 7 00:14:34.670265 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:14:34.671912 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:14:34.672460 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:14:34.680424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:14:34.687584 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Refreshing passwd entry cache Jul 7 00:14:34.687267 oslogin_cache_refresh[1701]: Refreshing passwd entry cache Jul 7 00:14:34.693550 kernel: hv_utils: KVP IC version 4.0 Jul 7 00:14:34.692554 KVP[1702]: KVP LIC Version: 3.1 Jul 7 00:14:34.689161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 7 00:14:34.693816 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:14:34.694075 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:14:34.695346 (chronyd)[1691]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 7 00:14:34.703773 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:14:34.706458 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Failure getting users, quitting Jul 7 00:14:34.706458 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:14:34.706458 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Refreshing group entry cache Jul 7 00:14:34.706110 oslogin_cache_refresh[1701]: Failure getting users, quitting Jul 7 00:14:34.706129 oslogin_cache_refresh[1701]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:14:34.706158 oslogin_cache_refresh[1701]: Refreshing group entry cache Jul 7 00:14:34.706725 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:14:34.717327 jq[1714]: true Jul 7 00:14:34.717359 chronyd[1732]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 7 00:14:34.719461 extend-filesystems[1700]: Found /dev/nvme0n1p6 Jul 7 00:14:34.725266 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Failure getting groups, quitting Jul 7 00:14:34.725266 google_oslogin_nss_cache[1701]: oslogin_cache_refresh[1701]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:14:34.725047 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jul 7 00:14:34.724178 oslogin_cache_refresh[1701]: Failure getting groups, quitting Jul 7 00:14:34.725206 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:14:34.724187 oslogin_cache_refresh[1701]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:14:34.736627 extend-filesystems[1700]: Found /dev/nvme0n1p9 Jul 7 00:14:34.738138 chronyd[1732]: Timezone right/UTC failed leap second check, ignoring Jul 7 00:14:34.738270 chronyd[1732]: Loaded seccomp filter (level 2) Jul 7 00:14:34.740553 jq[1735]: true Jul 7 00:14:34.745037 systemd[1]: Started chronyd.service - NTP client/server. Jul 7 00:14:34.745438 extend-filesystems[1700]: Checking size of /dev/nvme0n1p9 Jul 7 00:14:34.750917 update_engine[1713]: I20250707 00:14:34.747169 1713 main.cc:92] Flatcar Update Engine starting Jul 7 00:14:34.749596 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:14:34.766551 extend-filesystems[1700]: Old size kept for /dev/nvme0n1p9 Jul 7 00:14:34.767818 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:14:34.767980 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:14:34.809242 systemd-logind[1712]: New seat seat0. Jul 7 00:14:34.810133 systemd-logind[1712]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:14:34.810682 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:14:34.816504 tar[1718]: linux-amd64/LICENSE Jul 7 00:14:34.816504 tar[1718]: linux-amd64/helm Jul 7 00:14:34.818952 dbus-daemon[1694]: [system] SELinux support is enabled Jul 7 00:14:34.819068 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:14:34.825054 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 7 00:14:34.825080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:14:34.827107 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:14:34.827127 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:14:34.836039 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:14:34.839589 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:14:34.841302 update_engine[1713]: I20250707 00:14:34.839804 1713 update_check_scheduler.cc:74] Next update check in 6m15s Jul 7 00:14:34.846713 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:14:34.886055 bash[1764]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:14:34.886923 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:14:34.889726 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 7 00:14:34.926218 coreos-metadata[1693]: Jul 07 00:14:34.925 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 00:14:34.929589 coreos-metadata[1693]: Jul 07 00:14:34.929 INFO Fetch successful Jul 7 00:14:34.929589 coreos-metadata[1693]: Jul 07 00:14:34.929 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 7 00:14:34.934061 coreos-metadata[1693]: Jul 07 00:14:34.933 INFO Fetch successful Jul 7 00:14:34.934061 coreos-metadata[1693]: Jul 07 00:14:34.933 INFO Fetching http://168.63.129.16/machine/1020907e-fcdf-4ccb-8eee-5bac7801c38a/93adfcb6%2D82cd%2D4c4b%2Db0d3%2D05450b589080.%5Fci%2D4344.1.1%2Da%2D19494a3847?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 7 00:14:34.935518 coreos-metadata[1693]: Jul 07 00:14:34.935 INFO Fetch successful Jul 7 00:14:34.936956 coreos-metadata[1693]: Jul 07 00:14:34.936 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 7 00:14:34.950185 coreos-metadata[1693]: Jul 07 00:14:34.950 INFO Fetch successful Jul 7 00:14:35.055666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:14:35.055834 (ntainerd)[1801]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:14:35.058198 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:14:35.075401 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:14:35.076275 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:14:35.107255 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:14:35.339151 sshd_keygen[1736]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:14:35.359936 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 7 00:14:35.365184 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:14:35.368415 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 7 00:14:35.390799 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:14:35.390966 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:14:35.395230 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:14:35.410641 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 00:14:35.416382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:14:35.418691 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:14:35.421112 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:14:35.423611 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:14:35.555160 tar[1718]: linux-amd64/README.md Jul 7 00:14:35.567313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:14:35.878354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:14:35.888876 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:14:36.202849 containerd[1801]: time="2025-07-07T00:14:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:14:36.203009 containerd[1801]: time="2025-07-07T00:14:36.202980065Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:14:36.210936 containerd[1801]: time="2025-07-07T00:14:36.210895446Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.207µs" Jul 7 00:14:36.210936 containerd[1801]: time="2025-07-07T00:14:36.210923514Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:14:36.211023 containerd[1801]: time="2025-07-07T00:14:36.210940326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:14:36.211075 containerd[1801]: time="2025-07-07T00:14:36.211056458Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:14:36.211075 containerd[1801]: time="2025-07-07T00:14:36.211070859Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:14:36.211113 containerd[1801]: time="2025-07-07T00:14:36.211089408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211148 containerd[1801]: time="2025-07-07T00:14:36.211132990Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211148 containerd[1801]: 
time="2025-07-07T00:14:36.211144013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211330859Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211345104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211355448Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211362793Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211422390Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211579996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211598623Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211606950Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211632663Z" 
level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211826977Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:14:36.211937 containerd[1801]: time="2025-07-07T00:14:36.211860909Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:14:36.226026 containerd[1801]: time="2025-07-07T00:14:36.225900889Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:14:36.226026 containerd[1801]: time="2025-07-07T00:14:36.225979688Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:14:36.226026 containerd[1801]: time="2025-07-07T00:14:36.225995742Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:14:36.226026 containerd[1801]: time="2025-07-07T00:14:36.226006440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226077340Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226087744Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226099323Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226110020Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226119405Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226127989Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226136651Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:14:36.226153 containerd[1801]: time="2025-07-07T00:14:36.226151733Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226248150Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226263054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226278034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226291920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226302289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226311730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226321203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226329540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 
00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226338810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226347354Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226356506Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226407367Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226417840Z" level=info msg="Start snapshots syncer" Jul 7 00:14:36.227146 containerd[1801]: time="2025-07-07T00:14:36.226725828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:14:36.227389 containerd[1801]: time="2025-07-07T00:14:36.226976095Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:14:36.227389 containerd[1801]: time="2025-07-07T00:14:36.227016694Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227261996Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227402784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227419851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227428734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227439836Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227450792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227459154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227470160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227491933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227500646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:14:36.227518 containerd[1801]: time="2025-07-07T00:14:36.227513649Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:14:36.227696 containerd[1801]: time="2025-07-07T00:14:36.227670317Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228024680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228051256Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228063591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228076782Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228089938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228104027Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228119866Z" level=info msg="runtime interface created" Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228124711Z" level=info msg="created NRI interface" Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228136346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228147501Z" level=info msg="Connect containerd service" Jul 7 00:14:36.228642 containerd[1801]: time="2025-07-07T00:14:36.228186145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:14:36.229963 containerd[1801]: 
time="2025-07-07T00:14:36.229934068Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:14:36.350143 kubelet[1848]: E0707 00:14:36.350110 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:14:36.351474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:14:36.351593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:14:36.351825 systemd[1]: kubelet.service: Consumed 838ms CPU time, 261.9M memory peak. Jul 7 00:14:36.572299 waagent[1832]: 2025-07-07T00:14:36.572206Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 7 00:14:36.574931 waagent[1832]: 2025-07-07T00:14:36.574578Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 7 00:14:36.576627 waagent[1832]: 2025-07-07T00:14:36.576581Z INFO Daemon Daemon Python: 3.11.12 Jul 7 00:14:36.577915 waagent[1832]: 2025-07-07T00:14:36.577833Z INFO Daemon Daemon Run daemon Jul 7 00:14:36.580749 waagent[1832]: 2025-07-07T00:14:36.580697Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 7 00:14:36.583864 waagent[1832]: 2025-07-07T00:14:36.583752Z INFO Daemon Daemon Using waagent for provisioning Jul 7 00:14:36.585643 waagent[1832]: 2025-07-07T00:14:36.585607Z INFO Daemon Daemon Activate resource disk Jul 7 00:14:36.586696 waagent[1832]: 2025-07-07T00:14:36.586667Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 00:14:36.589451 waagent[1832]: 2025-07-07T00:14:36.589413Z INFO 
Daemon Daemon Found device: None Jul 7 00:14:36.590456 waagent[1832]: 2025-07-07T00:14:36.590426Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 00:14:36.592236 waagent[1832]: 2025-07-07T00:14:36.592207Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 00:14:36.596021 waagent[1832]: 2025-07-07T00:14:36.595915Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:14:36.598631 waagent[1832]: 2025-07-07T00:14:36.598594Z INFO Daemon Daemon Running default provisioning handler Jul 7 00:14:36.606115 waagent[1832]: 2025-07-07T00:14:36.606072Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 7 00:14:36.610858 waagent[1832]: 2025-07-07T00:14:36.610824Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 00:14:36.614597 waagent[1832]: 2025-07-07T00:14:36.614564Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 00:14:36.617583 waagent[1832]: 2025-07-07T00:14:36.617555Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 00:14:36.681598 waagent[1832]: 2025-07-07T00:14:36.681560Z INFO Daemon Daemon Successfully mounted dvd Jul 7 00:14:36.704358 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 00:14:36.706131 waagent[1832]: 2025-07-07T00:14:36.705041Z INFO Daemon Daemon Detect protocol endpoint Jul 7 00:14:36.708653 waagent[1832]: 2025-07-07T00:14:36.708611Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:14:36.709980 waagent[1832]: 2025-07-07T00:14:36.709935Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 7 00:14:36.712592 waagent[1832]: 2025-07-07T00:14:36.712569Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 00:14:36.715666 waagent[1832]: 2025-07-07T00:14:36.715639Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 00:14:36.716782 waagent[1832]: 2025-07-07T00:14:36.716729Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 00:14:36.726270 waagent[1832]: 2025-07-07T00:14:36.726243Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 00:14:36.729713 waagent[1832]: 2025-07-07T00:14:36.729688Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 00:14:36.730951 waagent[1832]: 2025-07-07T00:14:36.730899Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 00:14:36.876805 waagent[1832]: 2025-07-07T00:14:36.876720Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 00:14:36.878222 waagent[1832]: 2025-07-07T00:14:36.878188Z INFO Daemon Daemon Forcing an update of the goal state. Jul 7 00:14:36.884448 waagent[1832]: 2025-07-07T00:14:36.884421Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:14:36.899053 waagent[1832]: 2025-07-07T00:14:36.899001Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 00:14:36.900785 containerd[1801]: time="2025-07-07T00:14:36.900748230Z" level=info msg="Start subscribing containerd event" Jul 7 00:14:36.900839 containerd[1801]: time="2025-07-07T00:14:36.900797549Z" level=info msg="Start recovering state" Jul 7 00:14:36.900945 containerd[1801]: time="2025-07-07T00:14:36.900911412Z" level=info msg="Start event monitor" Jul 7 00:14:36.901089 containerd[1801]: time="2025-07-07T00:14:36.900924686Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:14:36.901089 containerd[1801]: time="2025-07-07T00:14:36.901045719Z" level=info msg="Start streaming server" Jul 7 00:14:36.901135 containerd[1801]: time="2025-07-07T00:14:36.901061929Z" level=info msg="Registered namespace 
\"k8s.io\" with NRI" Jul 7 00:14:36.901135 containerd[1801]: time="2025-07-07T00:14:36.901121765Z" level=info msg="runtime interface starting up..." Jul 7 00:14:36.901135 containerd[1801]: time="2025-07-07T00:14:36.901127724Z" level=info msg="starting plugins..." Jul 7 00:14:36.901275 containerd[1801]: time="2025-07-07T00:14:36.901141535Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:14:36.901275 containerd[1801]: time="2025-07-07T00:14:36.901066002Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:14:36.901275 containerd[1801]: time="2025-07-07T00:14:36.901236192Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:14:36.901275 containerd[1801]: time="2025-07-07T00:14:36.901272402Z" level=info msg="containerd successfully booted in 0.699006s" Jul 7 00:14:36.901408 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:14:36.903539 waagent[1832]: 2025-07-07T00:14:36.902949Z INFO Daemon Jul 7 00:14:36.903539 waagent[1832]: 2025-07-07T00:14:36.903170Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d0f173a2-db65-4fea-898a-2919176b4949 eTag: 4703023289220897990 source: Fabric] Jul 7 00:14:36.903539 waagent[1832]: 2025-07-07T00:14:36.903394Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 7 00:14:36.909482 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:14:36.912867 waagent[1832]: 2025-07-07T00:14:36.912835Z INFO Daemon Jul 7 00:14:36.914094 waagent[1832]: 2025-07-07T00:14:36.914071Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:14:36.916304 systemd[1]: Startup finished in 2.826s (kernel) + 11.063s (initrd) + 10.698s (userspace) = 24.588s. 
Jul 7 00:14:36.917736 waagent[1832]: 2025-07-07T00:14:36.917616Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 00:14:37.000915 waagent[1832]: 2025-07-07T00:14:37.000876Z INFO Daemon Downloaded certificate {'thumbprint': 'F5F1C50B7B74EF47E0BFB38233FCB8F4A3FEA2EC', 'hasPrivateKey': True} Jul 7 00:14:37.001285 waagent[1832]: 2025-07-07T00:14:37.001260Z INFO Daemon Fetch goal state completed Jul 7 00:14:37.013379 waagent[1832]: 2025-07-07T00:14:37.013345Z INFO Daemon Daemon Starting provisioning Jul 7 00:14:37.014118 waagent[1832]: 2025-07-07T00:14:37.013505Z INFO Daemon Daemon Handle ovf-env.xml. Jul 7 00:14:37.014118 waagent[1832]: 2025-07-07T00:14:37.013852Z INFO Daemon Daemon Set hostname [ci-4344.1.1-a-19494a3847] Jul 7 00:14:37.029540 waagent[1832]: 2025-07-07T00:14:37.029491Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-a-19494a3847] Jul 7 00:14:37.029821 waagent[1832]: 2025-07-07T00:14:37.029737Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 00:14:37.035192 waagent[1832]: 2025-07-07T00:14:37.029920Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 00:14:37.036136 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:14:37.036142 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 7 00:14:37.036160 systemd-networkd[1369]: eth0: DHCP lease lost Jul 7 00:14:37.036872 waagent[1832]: 2025-07-07T00:14:37.036834Z INFO Daemon Daemon Create user account if not exists Jul 7 00:14:37.038159 waagent[1832]: 2025-07-07T00:14:37.038074Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 00:14:37.038159 waagent[1832]: 2025-07-07T00:14:37.038176Z INFO Daemon Daemon Configure sudoer Jul 7 00:14:37.042078 waagent[1832]: 2025-07-07T00:14:37.042040Z INFO Daemon Daemon Configure sshd Jul 7 00:14:37.046064 waagent[1832]: 2025-07-07T00:14:37.046032Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 00:14:37.047458 waagent[1832]: 2025-07-07T00:14:37.046795Z INFO Daemon Daemon Deploy ssh public key. Jul 7 00:14:37.060561 systemd-networkd[1369]: eth0: DHCPv4 address 10.200.4.38/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 00:14:37.160025 login[1835]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 7 00:14:37.160181 login[1834]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:14:37.167433 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:14:37.168586 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:14:37.170504 systemd-logind[1712]: New session 2 of user core. Jul 7 00:14:37.184216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:14:37.185855 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:14:37.211141 (systemd)[1896]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:14:37.212741 systemd-logind[1712]: New session c1 of user core. Jul 7 00:14:37.330574 systemd[1896]: Queued start job for default target default.target. 
Jul 7 00:14:37.337161 systemd[1896]: Created slice app.slice - User Application Slice. Jul 7 00:14:37.337187 systemd[1896]: Reached target paths.target - Paths. Jul 7 00:14:37.337213 systemd[1896]: Reached target timers.target - Timers. Jul 7 00:14:37.337954 systemd[1896]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:14:37.344496 systemd[1896]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:14:37.344556 systemd[1896]: Reached target sockets.target - Sockets. Jul 7 00:14:37.344593 systemd[1896]: Reached target basic.target - Basic System. Jul 7 00:14:37.344645 systemd[1896]: Reached target default.target - Main User Target. Jul 7 00:14:37.344662 systemd[1896]: Startup finished in 128ms. Jul 7 00:14:37.344686 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:14:37.346029 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:14:38.136003 waagent[1832]: 2025-07-07T00:14:38.135971Z INFO Daemon Daemon Provisioning complete Jul 7 00:14:38.143427 waagent[1832]: 2025-07-07T00:14:38.143403Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 00:14:38.148592 waagent[1832]: 2025-07-07T00:14:38.143567Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 7 00:14:38.148592 waagent[1832]: 2025-07-07T00:14:38.144019Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 7 00:14:38.161804 login[1835]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:14:38.166465 systemd-logind[1712]: New session 1 of user core. Jul 7 00:14:38.170681 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 7 00:14:38.244740 waagent[1916]: 2025-07-07T00:14:38.244682Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 7 00:14:38.244936 waagent[1916]: 2025-07-07T00:14:38.244771Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 7 00:14:38.244936 waagent[1916]: 2025-07-07T00:14:38.244813Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 7 00:14:38.244936 waagent[1916]: 2025-07-07T00:14:38.244854Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 7 00:14:38.264325 waagent[1916]: 2025-07-07T00:14:38.264282Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 7 00:14:38.264442 waagent[1916]: 2025-07-07T00:14:38.264420Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:14:38.264489 waagent[1916]: 2025-07-07T00:14:38.264471Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:14:38.269723 waagent[1916]: 2025-07-07T00:14:38.269681Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:14:38.278065 waagent[1916]: 2025-07-07T00:14:38.278033Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 00:14:38.278371 waagent[1916]: 2025-07-07T00:14:38.278345Z INFO ExtHandler Jul 7 00:14:38.278419 waagent[1916]: 2025-07-07T00:14:38.278395Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6d041f8e-e352-429e-966f-bb5c74121f89 eTag: 4703023289220897990 source: Fabric] Jul 7 00:14:38.278609 waagent[1916]: 2025-07-07T00:14:38.278587Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 7 00:14:38.278891 waagent[1916]: 2025-07-07T00:14:38.278870Z INFO ExtHandler Jul 7 00:14:38.278927 waagent[1916]: 2025-07-07T00:14:38.278908Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:14:38.280962 waagent[1916]: 2025-07-07T00:14:38.280941Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 00:14:38.340192 waagent[1916]: 2025-07-07T00:14:38.340145Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5F1C50B7B74EF47E0BFB38233FCB8F4A3FEA2EC', 'hasPrivateKey': True} Jul 7 00:14:38.340502 waagent[1916]: 2025-07-07T00:14:38.340474Z INFO ExtHandler Fetch goal state completed Jul 7 00:14:38.349471 waagent[1916]: 2025-07-07T00:14:38.349431Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 7 00:14:38.353140 waagent[1916]: 2025-07-07T00:14:38.353096Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1916 Jul 7 00:14:38.353242 waagent[1916]: 2025-07-07T00:14:38.353205Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 00:14:38.353441 waagent[1916]: 2025-07-07T00:14:38.353423Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 7 00:14:38.354344 waagent[1916]: 2025-07-07T00:14:38.354316Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 00:14:38.354634 waagent[1916]: 2025-07-07T00:14:38.354608Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 7 00:14:38.354734 waagent[1916]: 2025-07-07T00:14:38.354716Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 7 00:14:38.355077 waagent[1916]: 2025-07-07T00:14:38.355055Z INFO ExtHandler ExtHandler Starting setup 
for Persistent firewall rules Jul 7 00:14:38.401646 waagent[1916]: 2025-07-07T00:14:38.401591Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 00:14:38.401745 waagent[1916]: 2025-07-07T00:14:38.401723Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 00:14:38.405934 waagent[1916]: 2025-07-07T00:14:38.405809Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 00:14:38.410759 systemd[1]: Reload requested from client PID 1939 ('systemctl') (unit waagent.service)... Jul 7 00:14:38.410778 systemd[1]: Reloading... Jul 7 00:14:38.481543 zram_generator::config[1977]: No configuration found. Jul 7 00:14:38.555386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:38.639642 systemd[1]: Reloading finished in 228 ms. Jul 7 00:14:38.656947 waagent[1916]: 2025-07-07T00:14:38.656629Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 00:14:38.656947 waagent[1916]: 2025-07-07T00:14:38.656707Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 00:14:38.984768 waagent[1916]: 2025-07-07T00:14:38.984705Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 00:14:38.984949 waagent[1916]: 2025-07-07T00:14:38.984926Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Jul 7 00:14:38.985517 waagent[1916]: 2025-07-07T00:14:38.985490Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 00:14:38.985686 waagent[1916]: 2025-07-07T00:14:38.985664Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:14:38.985742 waagent[1916]: 2025-07-07T00:14:38.985719Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:14:38.985905 waagent[1916]: 2025-07-07T00:14:38.985887Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 00:14:38.986107 waagent[1916]: 2025-07-07T00:14:38.986068Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 00:14:38.986210 waagent[1916]: 2025-07-07T00:14:38.986178Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:14:38.986428 waagent[1916]: 2025-07-07T00:14:38.986398Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 00:14:38.986497 waagent[1916]: 2025-07-07T00:14:38.986476Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 7 00:14:38.986591 waagent[1916]: 2025-07-07T00:14:38.986558Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 00:14:38.986591 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 00:14:38.986591 waagent[1916]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 00:14:38.986591 waagent[1916]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 00:14:38.986591 waagent[1916]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:14:38.986591 waagent[1916]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:14:38.986591 waagent[1916]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:14:38.986856 waagent[1916]: 2025-07-07T00:14:38.986824Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 00:14:38.987047 waagent[1916]: 2025-07-07T00:14:38.987029Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 00:14:38.987107 waagent[1916]: 2025-07-07T00:14:38.987075Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 7 00:14:38.987386 waagent[1916]: 2025-07-07T00:14:38.987365Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:14:38.987499 waagent[1916]: 2025-07-07T00:14:38.987480Z INFO EnvHandler ExtHandler Configure routes Jul 7 00:14:38.987770 waagent[1916]: 2025-07-07T00:14:38.987752Z INFO EnvHandler ExtHandler Gateway:None Jul 7 00:14:38.987810 waagent[1916]: 2025-07-07T00:14:38.987793Z INFO EnvHandler ExtHandler Routes:None Jul 7 00:14:38.993447 waagent[1916]: 2025-07-07T00:14:38.993422Z INFO ExtHandler ExtHandler Jul 7 00:14:38.993502 waagent[1916]: 2025-07-07T00:14:38.993470Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 22753416-8942-4d51-ae09-47900c69f7b3 correlation 9be6a65e-9e90-4457-86d2-743df594165a created: 2025-07-07T00:13:46.337852Z] Jul 7 00:14:38.993718 waagent[1916]: 2025-07-07T00:14:38.993695Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 7 00:14:38.994050 waagent[1916]: 2025-07-07T00:14:38.994029Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 7 00:14:39.016640 waagent[1916]: 2025-07-07T00:14:39.016483Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 7 00:14:39.016640 waagent[1916]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 7 00:14:39.016851 waagent[1916]: 2025-07-07T00:14:39.016824Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EB236F1D-06CC-40B2-AFC3-CD6ABEC5C37D;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 7 00:14:39.031959 waagent[1916]: 2025-07-07T00:14:39.031921Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 7 00:14:39.031959 waagent[1916]: Executing ['ip', '-a', '-o', 'link']:
Jul 7 00:14:39.031959 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 7 00:14:39.031959 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d1:8d:06 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 7 00:14:39.031959 waagent[1916]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d1:8d:06 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 7 00:14:39.031959 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 7 00:14:39.031959 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 7 00:14:39.031959 waagent[1916]: 2: eth0 inet 10.200.4.38/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 7 00:14:39.031959 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 7 00:14:39.031959 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 7 00:14:39.031959 waagent[1916]: 2: eth0 inet6 fe80::6245:bdff:fed1:8d06/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 00:14:39.031959 waagent[1916]: 3: enP30832s1 inet6 fe80::6245:bdff:fed1:8d06/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 00:14:39.059958 waagent[1916]: 2025-07-07T00:14:39.059917Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 7 00:14:39.059958 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 00:14:39.059958 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.059958 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 00:14:39.059958 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.059958 waagent[1916]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes)
Jul 7 00:14:39.059958 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.059958 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 00:14:39.059958 waagent[1916]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 00:14:39.059958 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 00:14:39.063092 waagent[1916]: 2025-07-07T00:14:39.063059Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 7 00:14:39.063092 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 00:14:39.063092 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.063092 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 00:14:39.063092 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.063092 waagent[1916]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes)
Jul 7 00:14:39.063092 waagent[1916]: pkts bytes target prot opt in out source destination
Jul 7 00:14:39.063092 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 00:14:39.063092 waagent[1916]: 13 1460 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 00:14:39.063092 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 00:14:46.602371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:14:46.603720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:14:47.293275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:14:47.297771 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:14:47.330566 kubelet[2075]: E0707 00:14:47.330517 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:14:47.332913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:14:47.333019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:14:47.333292 systemd[1]: kubelet.service: Consumed 117ms CPU time, 108.8M memory peak.
Jul 7 00:14:57.395088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 00:14:57.396405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:14:57.848609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:14:57.851208 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:14:57.885227 kubelet[2091]: E0707 00:14:57.885205 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:14:57.886728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:14:57.886835 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:14:57.887158 systemd[1]: kubelet.service: Consumed 111ms CPU time, 110.1M memory peak.
Jul 7 00:14:58.525845 chronyd[1732]: Selected source PHC0
Jul 7 00:15:03.549626 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 00:15:03.550646 systemd[1]: Started sshd@0-10.200.4.38:22-10.200.16.10:43282.service - OpenSSH per-connection server daemon (10.200.16.10:43282).
Jul 7 00:15:04.221859 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 43282 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:04.222753 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:04.226363 systemd-logind[1712]: New session 3 of user core.
Jul 7 00:15:04.232653 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 00:15:04.751760 systemd[1]: Started sshd@1-10.200.4.38:22-10.200.16.10:43288.service - OpenSSH per-connection server daemon (10.200.16.10:43288).
Jul 7 00:15:05.340500 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 43288 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:05.341346 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:05.344570 systemd-logind[1712]: New session 4 of user core.
Jul 7 00:15:05.355630 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 00:15:05.757280 sshd[2106]: Connection closed by 10.200.16.10 port 43288
Jul 7 00:15:05.757644 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:05.759438 systemd[1]: sshd@1-10.200.4.38:22-10.200.16.10:43288.service: Deactivated successfully.
Jul 7 00:15:05.761589 systemd-logind[1712]: Session 4 logged out. Waiting for processes to exit.
Jul 7 00:15:05.761685 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 00:15:05.762494 systemd-logind[1712]: Removed session 4.
Jul 7 00:15:05.865386 systemd[1]: Started sshd@2-10.200.4.38:22-10.200.16.10:43296.service - OpenSSH per-connection server daemon (10.200.16.10:43296).
Jul 7 00:15:06.467768 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 43296 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:06.468575 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:06.472041 systemd-logind[1712]: New session 5 of user core.
Jul 7 00:15:06.478645 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 00:15:06.891436 sshd[2114]: Connection closed by 10.200.16.10 port 43296
Jul 7 00:15:06.891886 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:06.893812 systemd[1]: sshd@2-10.200.4.38:22-10.200.16.10:43296.service: Deactivated successfully.
Jul 7 00:15:06.894970 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 00:15:06.896292 systemd-logind[1712]: Session 5 logged out. Waiting for processes to exit.
Jul 7 00:15:06.896928 systemd-logind[1712]: Removed session 5.
Jul 7 00:15:06.995400 systemd[1]: Started sshd@3-10.200.4.38:22-10.200.16.10:43302.service - OpenSSH per-connection server daemon (10.200.16.10:43302).
Jul 7 00:15:07.585244 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 43302 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:07.586040 sshd-session[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:07.589431 systemd-logind[1712]: New session 6 of user core.
Jul 7 00:15:07.598639 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 00:15:07.894873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 00:15:07.895993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:15:08.008283 sshd[2122]: Connection closed by 10.200.16.10 port 43302
Jul 7 00:15:08.008974 sshd-session[2120]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:08.010725 systemd[1]: sshd@3-10.200.4.38:22-10.200.16.10:43302.service: Deactivated successfully.
Jul 7 00:15:08.011753 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 00:15:08.013052 systemd-logind[1712]: Session 6 logged out. Waiting for processes to exit.
Jul 7 00:15:08.013690 systemd-logind[1712]: Removed session 6.
Jul 7 00:15:08.112361 systemd[1]: Started sshd@4-10.200.4.38:22-10.200.16.10:43306.service - OpenSSH per-connection server daemon (10.200.16.10:43306).
Jul 7 00:15:08.349290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:15:08.357748 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:15:08.387180 kubelet[2138]: E0707 00:15:08.387151 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:15:08.388393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:15:08.388514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:15:08.388798 systemd[1]: kubelet.service: Consumed 114ms CPU time, 108.4M memory peak.
Jul 7 00:15:08.713946 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 43306 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:08.714755 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:08.718318 systemd-logind[1712]: New session 7 of user core.
Jul 7 00:15:08.724640 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 00:15:09.132702 sudo[2146]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 00:15:09.132899 sudo[2146]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:15:09.158313 sudo[2146]: pam_unix(sudo:session): session closed for user root
Jul 7 00:15:09.252370 sshd[2145]: Connection closed by 10.200.16.10 port 43306
Jul 7 00:15:09.252898 sshd-session[2131]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:09.255148 systemd[1]: sshd@4-10.200.4.38:22-10.200.16.10:43306.service: Deactivated successfully.
Jul 7 00:15:09.256281 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 00:15:09.257347 systemd-logind[1712]: Session 7 logged out. Waiting for processes to exit.
Jul 7 00:15:09.258275 systemd-logind[1712]: Removed session 7.
Jul 7 00:15:09.356459 systemd[1]: Started sshd@5-10.200.4.38:22-10.200.16.10:43322.service - OpenSSH per-connection server daemon (10.200.16.10:43322).
Jul 7 00:15:09.953217 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 43322 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:09.954025 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:09.957380 systemd-logind[1712]: New session 8 of user core.
Jul 7 00:15:09.966629 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 00:15:10.279923 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 00:15:10.280108 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:15:10.286012 sudo[2156]: pam_unix(sudo:session): session closed for user root
Jul 7 00:15:10.289288 sudo[2155]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 00:15:10.289468 sudo[2155]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:15:10.295707 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 00:15:10.325147 augenrules[2178]: No rules
Jul 7 00:15:10.325899 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:15:10.326056 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 00:15:10.326823 sudo[2155]: pam_unix(sudo:session): session closed for user root
Jul 7 00:15:10.421357 sshd[2154]: Connection closed by 10.200.16.10 port 43322
Jul 7 00:15:10.421677 sshd-session[2152]: pam_unix(sshd:session): session closed for user core
Jul 7 00:15:10.423541 systemd[1]: sshd@5-10.200.4.38:22-10.200.16.10:43322.service: Deactivated successfully.
Jul 7 00:15:10.424606 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 00:15:10.425517 systemd-logind[1712]: Session 8 logged out. Waiting for processes to exit.
Jul 7 00:15:10.426350 systemd-logind[1712]: Removed session 8.
Jul 7 00:15:10.529342 systemd[1]: Started sshd@6-10.200.4.38:22-10.200.16.10:48672.service - OpenSSH per-connection server daemon (10.200.16.10:48672).
Jul 7 00:15:11.126167 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 48672 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:15:11.126982 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:15:11.130244 systemd-logind[1712]: New session 9 of user core.
Jul 7 00:15:11.140630 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 00:15:11.452295 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 00:15:11.452476 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:15:12.950223 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 00:15:12.963764 (dockerd)[2208]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 00:15:13.668066 dockerd[2208]: time="2025-07-07T00:15:13.668025146Z" level=info msg="Starting up"
Jul 7 00:15:13.668605 dockerd[2208]: time="2025-07-07T00:15:13.668585133Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 00:15:13.755581 dockerd[2208]: time="2025-07-07T00:15:13.755555640Z" level=info msg="Loading containers: start."
Jul 7 00:15:13.781548 kernel: Initializing XFRM netlink socket
Jul 7 00:15:14.062715 systemd-networkd[1369]: docker0: Link UP
Jul 7 00:15:14.076274 dockerd[2208]: time="2025-07-07T00:15:14.076250410Z" level=info msg="Loading containers: done."
Jul 7 00:15:14.101849 dockerd[2208]: time="2025-07-07T00:15:14.101824286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 00:15:14.101944 dockerd[2208]: time="2025-07-07T00:15:14.101879116Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 00:15:14.101971 dockerd[2208]: time="2025-07-07T00:15:14.101947330Z" level=info msg="Initializing buildkit"
Jul 7 00:15:14.145342 dockerd[2208]: time="2025-07-07T00:15:14.145309440Z" level=info msg="Completed buildkit initialization"
Jul 7 00:15:14.150232 dockerd[2208]: time="2025-07-07T00:15:14.150195766Z" level=info msg="Daemon has completed initialization"
Jul 7 00:15:14.150348 dockerd[2208]: time="2025-07-07T00:15:14.150248000Z" level=info msg="API listen on /run/docker.sock"
Jul 7 00:15:14.150392 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 00:15:15.373952 containerd[1801]: time="2025-07-07T00:15:15.373924636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 7 00:15:16.182473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967047781.mount: Deactivated successfully.
Jul 7 00:15:17.514830 containerd[1801]: time="2025-07-07T00:15:17.514791889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:17.517274 containerd[1801]: time="2025-07-07T00:15:17.517241728Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jul 7 00:15:17.520133 containerd[1801]: time="2025-07-07T00:15:17.520096297Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:17.523801 containerd[1801]: time="2025-07-07T00:15:17.523763783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:17.524478 containerd[1801]: time="2025-07-07T00:15:17.524345238Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.150386432s"
Jul 7 00:15:17.524478 containerd[1801]: time="2025-07-07T00:15:17.524375845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 7 00:15:17.524986 containerd[1801]: time="2025-07-07T00:15:17.524945117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 7 00:15:18.395640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 00:15:18.397831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:15:18.568557 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 7 00:15:19.014871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:15:19.017633 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:15:19.049091 kubelet[2466]: E0707 00:15:19.049067 2466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:15:19.050404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:15:19.050595 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:15:19.050855 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.4M memory peak.
Jul 7 00:15:19.428256 containerd[1801]: time="2025-07-07T00:15:19.428219599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:19.430927 containerd[1801]: time="2025-07-07T00:15:19.430896573Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jul 7 00:15:19.433850 containerd[1801]: time="2025-07-07T00:15:19.433818416Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:19.437516 containerd[1801]: time="2025-07-07T00:15:19.437478694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:19.438112 containerd[1801]: time="2025-07-07T00:15:19.437991285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.913021489s"
Jul 7 00:15:19.438112 containerd[1801]: time="2025-07-07T00:15:19.438018592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 7 00:15:19.438545 containerd[1801]: time="2025-07-07T00:15:19.438532268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 7 00:15:20.334182 update_engine[1713]: I20250707 00:15:20.334117 1713 update_attempter.cc:509] Updating boot flags...
Jul 7 00:15:20.940418 containerd[1801]: time="2025-07-07T00:15:20.940387332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:20.942879 containerd[1801]: time="2025-07-07T00:15:20.942849683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jul 7 00:15:20.945673 containerd[1801]: time="2025-07-07T00:15:20.945643300Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:20.951754 containerd[1801]: time="2025-07-07T00:15:20.951715551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:20.952438 containerd[1801]: time="2025-07-07T00:15:20.952278089Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.513708183s"
Jul 7 00:15:20.952438 containerd[1801]: time="2025-07-07T00:15:20.952303251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 7 00:15:20.952882 containerd[1801]: time="2025-07-07T00:15:20.952865772Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 7 00:15:21.938635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236198791.mount: Deactivated successfully.
Jul 7 00:15:22.414044 containerd[1801]: time="2025-07-07T00:15:22.414010047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:22.416570 containerd[1801]: time="2025-07-07T00:15:22.416542635Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jul 7 00:15:22.419629 containerd[1801]: time="2025-07-07T00:15:22.419597563Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:22.423151 containerd[1801]: time="2025-07-07T00:15:22.423112231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:22.423377 containerd[1801]: time="2025-07-07T00:15:22.423358734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.470470175s"
Jul 7 00:15:22.423409 containerd[1801]: time="2025-07-07T00:15:22.423384711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 7 00:15:22.423858 containerd[1801]: time="2025-07-07T00:15:22.423826788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 00:15:23.105184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890204123.mount: Deactivated successfully.
Jul 7 00:15:23.953345 containerd[1801]: time="2025-07-07T00:15:23.953313977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:23.955914 containerd[1801]: time="2025-07-07T00:15:23.955866902Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 7 00:15:23.958607 containerd[1801]: time="2025-07-07T00:15:23.958574042Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:23.963371 containerd[1801]: time="2025-07-07T00:15:23.963323338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:23.964097 containerd[1801]: time="2025-07-07T00:15:23.963976005Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.540116082s"
Jul 7 00:15:23.964097 containerd[1801]: time="2025-07-07T00:15:23.964002219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 00:15:23.964505 containerd[1801]: time="2025-07-07T00:15:23.964489319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 00:15:24.571831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844941035.mount: Deactivated successfully.
Jul 7 00:15:24.589482 containerd[1801]: time="2025-07-07T00:15:24.589457627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:15:24.591972 containerd[1801]: time="2025-07-07T00:15:24.591942875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 00:15:24.594989 containerd[1801]: time="2025-07-07T00:15:24.594956295Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:15:24.598620 containerd[1801]: time="2025-07-07T00:15:24.598588298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:15:24.599264 containerd[1801]: time="2025-07-07T00:15:24.599002852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 634.489612ms"
Jul 7 00:15:24.599264 containerd[1801]: time="2025-07-07T00:15:24.599025119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 00:15:24.599477 containerd[1801]: time="2025-07-07T00:15:24.599460146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 7 00:15:25.293096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066341209.mount: Deactivated successfully.
Jul 7 00:15:27.013709 containerd[1801]: time="2025-07-07T00:15:27.013672213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:27.016584 containerd[1801]: time="2025-07-07T00:15:27.016556836Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jul 7 00:15:27.020252 containerd[1801]: time="2025-07-07T00:15:27.020179518Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:27.026861 containerd[1801]: time="2025-07-07T00:15:27.026807787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:15:27.027533 containerd[1801]: time="2025-07-07T00:15:27.027493752Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.427976292s"
Jul 7 00:15:27.027533 containerd[1801]: time="2025-07-07T00:15:27.027531180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 7 00:15:29.145075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 7 00:15:29.148715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:15:29.283551 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 00:15:29.283622 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 00:15:29.283891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:15:29.286830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:15:29.308442 systemd[1]: Reload requested from client PID 2665 ('systemctl') (unit session-9.scope)...
Jul 7 00:15:29.308453 systemd[1]: Reloading...
Jul 7 00:15:29.387541 zram_generator::config[2707]: No configuration found.
Jul 7 00:15:29.462965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:15:29.552887 systemd[1]: Reloading finished in 244 ms.
Jul 7 00:15:29.584675 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 00:15:29.584738 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 00:15:29.584961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:15:29.585007 systemd[1]: kubelet.service: Consumed 58ms CPU time, 67.1M memory peak.
Jul 7 00:15:29.586094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:15:30.145711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:15:30.148788 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:15:30.182775 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:15:30.182965 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:15:30.182965 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:15:30.182965 kubelet[2778]: I0707 00:15:30.182882 2778 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:15:30.549434 kubelet[2778]: I0707 00:15:30.549222 2778 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:15:30.549434 kubelet[2778]: I0707 00:15:30.549239 2778 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:15:30.549590 kubelet[2778]: I0707 00:15:30.549576 2778 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:15:30.569614 kubelet[2778]: E0707 00:15:30.569582 2778 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:30.570317 kubelet[2778]: I0707 00:15:30.570232 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:15:30.576793 kubelet[2778]: I0707 00:15:30.576780 2778 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:15:30.578575 kubelet[2778]: I0707 00:15:30.578561 2778 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:15:30.580099 kubelet[2778]: I0707 00:15:30.580069 2778 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:15:30.580246 kubelet[2778]: I0707 00:15:30.580097 2778 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-19494a3847","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:15:30.580351 kubelet[2778]: I0707 00:15:30.580252 2778 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 7 00:15:30.580351 kubelet[2778]: I0707 00:15:30.580261 2778 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:15:30.580394 kubelet[2778]: I0707 00:15:30.580355 2778 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:30.583034 kubelet[2778]: I0707 00:15:30.583022 2778 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:15:30.583091 kubelet[2778]: I0707 00:15:30.583044 2778 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:15:30.583091 kubelet[2778]: I0707 00:15:30.583064 2778 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:15:30.583091 kubelet[2778]: I0707 00:15:30.583073 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:15:30.588555 kubelet[2778]: W0707 00:15:30.588281 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:30.588555 kubelet[2778]: E0707 00:15:30.588336 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:30.588684 kubelet[2778]: W0707 00:15:30.588671 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-19494a3847&limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:30.588753 kubelet[2778]: E0707 00:15:30.588729 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://10.200.4.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-19494a3847&limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:30.589254 kubelet[2778]: I0707 00:15:30.588820 2778 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:15:30.589254 kubelet[2778]: I0707 00:15:30.589158 2778 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:15:30.589757 kubelet[2778]: W0707 00:15:30.589741 2778 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:15:30.591443 kubelet[2778]: I0707 00:15:30.591427 2778 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:15:30.591504 kubelet[2778]: I0707 00:15:30.591454 2778 server.go:1287] "Started kubelet" Jul 7 00:15:30.591602 kubelet[2778]: I0707 00:15:30.591565 2778 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:15:30.592482 kubelet[2778]: I0707 00:15:30.592185 2778 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:15:30.594381 kubelet[2778]: I0707 00:15:30.593806 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:15:30.596554 kubelet[2778]: I0707 00:15:30.596141 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:15:30.596554 kubelet[2778]: I0707 00:15:30.596312 2778 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:15:30.597816 kubelet[2778]: E0707 00:15:30.596601 2778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.38:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4344.1.1-a-19494a3847.184fcfdd909e7740 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-19494a3847,UID:ci-4344.1.1-a-19494a3847,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-19494a3847,},FirstTimestamp:2025-07-07 00:15:30.591438656 +0000 UTC m=+0.439789534,LastTimestamp:2025-07-07 00:15:30.591438656 +0000 UTC m=+0.439789534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-19494a3847,}" Jul 7 00:15:30.598933 kubelet[2778]: I0707 00:15:30.598718 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:15:30.599975 kubelet[2778]: I0707 00:15:30.599736 2778 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:15:30.599975 kubelet[2778]: E0707 00:15:30.599885 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:30.600331 kubelet[2778]: E0707 00:15:30.600313 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-19494a3847?timeout=10s\": dial tcp 10.200.4.38:6443: connect: connection refused" interval="200ms" Jul 7 00:15:30.600541 kubelet[2778]: I0707 00:15:30.600518 2778 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:15:30.600640 kubelet[2778]: I0707 00:15:30.600631 2778 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:15:30.602117 kubelet[2778]: I0707 
00:15:30.602107 2778 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:15:30.605001 kubelet[2778]: W0707 00:15:30.604965 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:30.606564 kubelet[2778]: E0707 00:15:30.606039 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:30.606564 kubelet[2778]: I0707 00:15:30.602141 2778 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:15:30.606789 kubelet[2778]: I0707 00:15:30.602113 2778 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:15:30.623694 kubelet[2778]: E0707 00:15:30.623669 2778 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:15:30.628457 kubelet[2778]: I0707 00:15:30.628432 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:15:30.630396 kubelet[2778]: I0707 00:15:30.630122 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:15:30.630396 kubelet[2778]: I0707 00:15:30.630141 2778 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:15:30.630396 kubelet[2778]: I0707 00:15:30.630155 2778 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:15:30.630396 kubelet[2778]: I0707 00:15:30.630160 2778 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:15:30.630396 kubelet[2778]: E0707 00:15:30.630194 2778 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:15:30.631382 kubelet[2778]: W0707 00:15:30.631352 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:30.631490 kubelet[2778]: E0707 00:15:30.631479 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:30.632516 kubelet[2778]: I0707 00:15:30.632371 2778 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:15:30.632516 kubelet[2778]: I0707 00:15:30.632383 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:15:30.632516 kubelet[2778]: I0707 00:15:30.632396 2778 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:30.639958 kubelet[2778]: I0707 00:15:30.639943 2778 policy_none.go:49] "None policy: Start" Jul 7 00:15:30.639958 kubelet[2778]: I0707 00:15:30.639958 2778 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:15:30.640048 kubelet[2778]: I0707 00:15:30.639968 2778 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:15:30.649584 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:15:30.660787 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 00:15:30.663019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:15:30.670339 kubelet[2778]: I0707 00:15:30.669959 2778 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:15:30.670339 kubelet[2778]: I0707 00:15:30.670082 2778 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:15:30.670339 kubelet[2778]: I0707 00:15:30.670090 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:15:30.670339 kubelet[2778]: I0707 00:15:30.670284 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:15:30.671054 kubelet[2778]: E0707 00:15:30.671042 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:15:30.671175 kubelet[2778]: E0707 00:15:30.671168 2778 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:30.737245 systemd[1]: Created slice kubepods-burstable-pod8e79443bca164fe893b0f33a9ecb97f6.slice - libcontainer container kubepods-burstable-pod8e79443bca164fe893b0f33a9ecb97f6.slice. Jul 7 00:15:30.749996 kubelet[2778]: E0707 00:15:30.749975 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.751583 systemd[1]: Created slice kubepods-burstable-pod8908cc22fc2d3bbd92442a35d163717c.slice - libcontainer container kubepods-burstable-pod8908cc22fc2d3bbd92442a35d163717c.slice. 
Jul 7 00:15:30.756270 kubelet[2778]: E0707 00:15:30.756252 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.758398 systemd[1]: Created slice kubepods-burstable-pod03272c166770829c63910e769ba0109c.slice - libcontainer container kubepods-burstable-pod03272c166770829c63910e769ba0109c.slice. Jul 7 00:15:30.759696 kubelet[2778]: E0707 00:15:30.759681 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.771638 kubelet[2778]: I0707 00:15:30.771624 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.771913 kubelet[2778]: E0707 00:15:30.771899 2778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.38:6443/api/v1/nodes\": dial tcp 10.200.4.38:6443: connect: connection refused" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.801343 kubelet[2778]: E0707 00:15:30.801253 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-19494a3847?timeout=10s\": dial tcp 10.200.4.38:6443: connect: connection refused" interval="400ms" Jul 7 00:15:30.908210 kubelet[2778]: I0707 00:15:30.908185 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908326 kubelet[2778]: I0707 00:15:30.908215 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908326 kubelet[2778]: I0707 00:15:30.908230 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908326 kubelet[2778]: I0707 00:15:30.908245 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908326 kubelet[2778]: I0707 00:15:30.908261 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908326 kubelet[2778]: I0707 00:15:30.908274 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: 
\"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908445 kubelet[2778]: I0707 00:15:30.908287 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908445 kubelet[2778]: I0707 00:15:30.908300 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03272c166770829c63910e769ba0109c-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-19494a3847\" (UID: \"03272c166770829c63910e769ba0109c\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.908445 kubelet[2778]: I0707 00:15:30.908313 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.973330 kubelet[2778]: I0707 00:15:30.973317 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:30.973569 kubelet[2778]: E0707 00:15:30.973541 2778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.38:6443/api/v1/nodes\": dial tcp 10.200.4.38:6443: connect: connection refused" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:31.051657 containerd[1801]: time="2025-07-07T00:15:31.051595874Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-19494a3847,Uid:8e79443bca164fe893b0f33a9ecb97f6,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:31.056973 containerd[1801]: time="2025-07-07T00:15:31.056950002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-19494a3847,Uid:8908cc22fc2d3bbd92442a35d163717c,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:31.060810 containerd[1801]: time="2025-07-07T00:15:31.060786950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-19494a3847,Uid:03272c166770829c63910e769ba0109c,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:31.202122 kubelet[2778]: E0707 00:15:31.202103 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-19494a3847?timeout=10s\": dial tcp 10.200.4.38:6443: connect: connection refused" interval="800ms" Jul 7 00:15:31.375466 kubelet[2778]: I0707 00:15:31.375411 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:31.376183 kubelet[2778]: E0707 00:15:31.376145 2778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.38:6443/api/v1/nodes\": dial tcp 10.200.4.38:6443: connect: connection refused" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:31.489442 kubelet[2778]: W0707 00:15:31.489407 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:31.489504 kubelet[2778]: E0707 00:15:31.489453 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.4.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:31.664644 kubelet[2778]: W0707 00:15:31.664599 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-19494a3847&limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:31.664730 kubelet[2778]: E0707 00:15:31.664654 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-19494a3847&limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:31.676309 kubelet[2778]: W0707 00:15:31.676289 2778 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.38:6443: connect: connection refused Jul 7 00:15:31.676376 kubelet[2778]: E0707 00:15:31.676317 2778 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.38:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:15:31.722231 containerd[1801]: time="2025-07-07T00:15:31.722128746Z" level=info msg="connecting to shim 9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce" address="unix:///run/containerd/s/3eb8462dea6cc9b9a5fbdd9461b8eeafcf1e884102d5c9b358b625090431f684" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:31.739206 containerd[1801]: 
time="2025-07-07T00:15:31.739173438Z" level=info msg="connecting to shim 13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d" address="unix:///run/containerd/s/c7be59e0c9c06bd62ba4078ac2b63b34234b79f76d483369136986d20b6092bd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:31.745622 containerd[1801]: time="2025-07-07T00:15:31.745563820Z" level=info msg="connecting to shim 8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd" address="unix:///run/containerd/s/024a884c78ebf8e0aa0792dc0f2855f04557714b608cd8184085dff08d36567b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:31.759772 systemd[1]: Started cri-containerd-9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce.scope - libcontainer container 9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce. Jul 7 00:15:31.765553 systemd[1]: Started cri-containerd-13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d.scope - libcontainer container 13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d. Jul 7 00:15:31.773850 systemd[1]: Started cri-containerd-8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd.scope - libcontainer container 8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd. 
Jul 7 00:15:31.830150 containerd[1801]: time="2025-07-07T00:15:31.830122007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-19494a3847,Uid:8e79443bca164fe893b0f33a9ecb97f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d\"" Jul 7 00:15:31.833438 containerd[1801]: time="2025-07-07T00:15:31.833397204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-19494a3847,Uid:03272c166770829c63910e769ba0109c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce\"" Jul 7 00:15:31.835757 containerd[1801]: time="2025-07-07T00:15:31.835608539Z" level=info msg="CreateContainer within sandbox \"13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:15:31.838166 containerd[1801]: time="2025-07-07T00:15:31.838146472Z" level=info msg="CreateContainer within sandbox \"9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:15:31.848024 containerd[1801]: time="2025-07-07T00:15:31.848004679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-19494a3847,Uid:8908cc22fc2d3bbd92442a35d163717c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd\"" Jul 7 00:15:31.850964 containerd[1801]: time="2025-07-07T00:15:31.850939946Z" level=info msg="CreateContainer within sandbox \"8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:15:31.869641 containerd[1801]: time="2025-07-07T00:15:31.869583529Z" level=info msg="Container 1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360: CDI devices from CRI 
Config.CDIDevices: []" Jul 7 00:15:31.878293 containerd[1801]: time="2025-07-07T00:15:31.878272987Z" level=info msg="Container 74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:31.889548 containerd[1801]: time="2025-07-07T00:15:31.889513232Z" level=info msg="Container 6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:31.902819 containerd[1801]: time="2025-07-07T00:15:31.902797198Z" level=info msg="CreateContainer within sandbox \"13b4ee6950f4a80c91ec103d0c5cc5f29fa0f3542d2c5e96e9fef75109674a2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360\"" Jul 7 00:15:31.903174 containerd[1801]: time="2025-07-07T00:15:31.903156307Z" level=info msg="StartContainer for \"1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360\"" Jul 7 00:15:31.903847 containerd[1801]: time="2025-07-07T00:15:31.903822131Z" level=info msg="connecting to shim 1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360" address="unix:///run/containerd/s/c7be59e0c9c06bd62ba4078ac2b63b34234b79f76d483369136986d20b6092bd" protocol=ttrpc version=3 Jul 7 00:15:31.911054 containerd[1801]: time="2025-07-07T00:15:31.910056282Z" level=info msg="CreateContainer within sandbox \"8c26b36cc3e9492e084c18b66d7a0c9d66f720344cf359a6b948bf48d1e5c6fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851\"" Jul 7 00:15:31.911054 containerd[1801]: time="2025-07-07T00:15:31.910476164Z" level=info msg="StartContainer for \"6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851\"" Jul 7 00:15:31.911518 containerd[1801]: time="2025-07-07T00:15:31.911488708Z" level=info msg="connecting to shim 6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851" 
address="unix:///run/containerd/s/024a884c78ebf8e0aa0792dc0f2855f04557714b608cd8184085dff08d36567b" protocol=ttrpc version=3 Jul 7 00:15:31.913319 containerd[1801]: time="2025-07-07T00:15:31.913290258Z" level=info msg="CreateContainer within sandbox \"9c577b0b0886cd8121e85349ce7ec74d1ce32b8114025fc116d556bed92fe1ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae\"" Jul 7 00:15:31.914093 containerd[1801]: time="2025-07-07T00:15:31.913991395Z" level=info msg="StartContainer for \"74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae\"" Jul 7 00:15:31.916646 containerd[1801]: time="2025-07-07T00:15:31.916553229Z" level=info msg="connecting to shim 74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae" address="unix:///run/containerd/s/3eb8462dea6cc9b9a5fbdd9461b8eeafcf1e884102d5c9b358b625090431f684" protocol=ttrpc version=3 Jul 7 00:15:31.919685 systemd[1]: Started cri-containerd-1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360.scope - libcontainer container 1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360. Jul 7 00:15:31.932647 systemd[1]: Started cri-containerd-6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851.scope - libcontainer container 6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851. Jul 7 00:15:31.935962 systemd[1]: Started cri-containerd-74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae.scope - libcontainer container 74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae. 
Jul 7 00:15:31.983127 containerd[1801]: time="2025-07-07T00:15:31.983105246Z" level=info msg="StartContainer for \"1a9fd8fc607d5c702f129094431d692ef9bf8d90a1b548d43a7c50542a7fd360\" returns successfully" Jul 7 00:15:32.000260 containerd[1801]: time="2025-07-07T00:15:32.000204318Z" level=info msg="StartContainer for \"6fd4a51acf4945c26cd5bf8616c3c7a3bf3f497bdd7cb9afe1e6c3a62399c851\" returns successfully" Jul 7 00:15:32.002835 kubelet[2778]: E0707 00:15:32.002724 2778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-19494a3847?timeout=10s\": dial tcp 10.200.4.38:6443: connect: connection refused" interval="1.6s" Jul 7 00:15:32.050714 containerd[1801]: time="2025-07-07T00:15:32.050693594Z" level=info msg="StartContainer for \"74f31c00ec6ace6635f71e6b377cbda918b519298316d0102e2d86dd68d9b9ae\" returns successfully" Jul 7 00:15:32.177701 kubelet[2778]: I0707 00:15:32.177647 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:32.639282 kubelet[2778]: E0707 00:15:32.638512 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:32.641129 kubelet[2778]: E0707 00:15:32.641109 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:32.644360 kubelet[2778]: E0707 00:15:32.644337 2778 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-19494a3847\" not found" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.348178 kubelet[2778]: I0707 00:15:33.348141 2778 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-19494a3847" Jul 7 
00:15:33.348436 kubelet[2778]: E0707 00:15:33.348316 2778 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-a-19494a3847\": node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:33.369767 kubelet[2778]: E0707 00:15:33.369744 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:33.470596 kubelet[2778]: E0707 00:15:33.470556 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:33.570655 kubelet[2778]: E0707 00:15:33.570628 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-19494a3847\" not found" Jul 7 00:15:33.644973 kubelet[2778]: I0707 00:15:33.644857 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.644973 kubelet[2778]: I0707 00:15:33.644893 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.648373 kubelet[2778]: E0707 00:15:33.648353 2778 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-19494a3847\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.648530 kubelet[2778]: E0707 00:15:33.648353 2778 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-19494a3847\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.700527 kubelet[2778]: I0707 00:15:33.700505 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.701926 kubelet[2778]: E0707 00:15:33.701890 2778 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-19494a3847\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.701926 kubelet[2778]: I0707 00:15:33.701907 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.703304 kubelet[2778]: E0707 00:15:33.703283 2778 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.703304 kubelet[2778]: I0707 00:15:33.703301 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:33.704550 kubelet[2778]: E0707 00:15:33.704511 2778 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-19494a3847\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:34.585955 kubelet[2778]: I0707 00:15:34.585939 2778 apiserver.go:52] "Watching apiserver" Jul 7 00:15:34.607571 kubelet[2778]: I0707 00:15:34.607554 2778 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:15:34.645937 kubelet[2778]: I0707 00:15:34.645923 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:34.652264 kubelet[2778]: W0707 00:15:34.652219 2778 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:35.591150 systemd[1]: Reload requested from client PID 3049 ('systemctl') (unit session-9.scope)... 
Jul 7 00:15:35.591162 systemd[1]: Reloading... Jul 7 00:15:35.668607 zram_generator::config[3095]: No configuration found. Jul 7 00:15:35.745517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:15:35.840601 systemd[1]: Reloading finished in 249 ms. Jul 7 00:15:35.868717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:35.892189 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:15:35.892391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:35.892432 systemd[1]: kubelet.service: Consumed 699ms CPU time, 129.3M memory peak. Jul 7 00:15:35.893628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:15:36.397267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:15:36.400911 (kubelet)[3162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:15:36.438029 kubelet[3162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:15:36.438029 kubelet[3162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:15:36.438029 kubelet[3162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:15:36.438270 kubelet[3162]: I0707 00:15:36.438103 3162 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:15:36.442196 kubelet[3162]: I0707 00:15:36.442174 3162 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:15:36.442196 kubelet[3162]: I0707 00:15:36.442190 3162 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:15:36.442401 kubelet[3162]: I0707 00:15:36.442389 3162 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:15:36.443854 kubelet[3162]: I0707 00:15:36.443791 3162 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:15:36.447295 kubelet[3162]: I0707 00:15:36.446831 3162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:15:36.449949 kubelet[3162]: I0707 00:15:36.449938 3162 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:15:36.452868 kubelet[3162]: I0707 00:15:36.452853 3162 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:15:36.453100 kubelet[3162]: I0707 00:15:36.453073 3162 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:15:36.453867 kubelet[3162]: I0707 00:15:36.453140 3162 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-19494a3847","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:15:36.454050 kubelet[3162]: I0707 00:15:36.454037 3162 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 7 00:15:36.454142 kubelet[3162]: I0707 00:15:36.454121 3162 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:15:36.454264 kubelet[3162]: I0707 00:15:36.454258 3162 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:36.454428 kubelet[3162]: I0707 00:15:36.454423 3162 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:15:36.454482 kubelet[3162]: I0707 00:15:36.454477 3162 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:15:36.454547 kubelet[3162]: I0707 00:15:36.454543 3162 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:15:36.454580 kubelet[3162]: I0707 00:15:36.454576 3162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:15:36.458917 kubelet[3162]: I0707 00:15:36.458209 3162 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:15:36.459901 kubelet[3162]: I0707 00:15:36.459431 3162 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:15:36.462699 kubelet[3162]: I0707 00:15:36.462686 3162 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:15:36.464073 kubelet[3162]: I0707 00:15:36.462780 3162 server.go:1287] "Started kubelet" Jul 7 00:15:36.466013 kubelet[3162]: I0707 00:15:36.465406 3162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:15:36.474941 kubelet[3162]: I0707 00:15:36.474888 3162 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:15:36.475716 kubelet[3162]: I0707 00:15:36.475699 3162 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:15:36.476892 kubelet[3162]: I0707 00:15:36.476869 3162 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:15:36.477007 kubelet[3162]: I0707 00:15:36.476971 3162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 
00:15:36.477900 kubelet[3162]: I0707 00:15:36.477887 3162 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:15:36.478131 kubelet[3162]: I0707 00:15:36.478120 3162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:15:36.478344 kubelet[3162]: I0707 00:15:36.478327 3162 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:15:36.478422 kubelet[3162]: I0707 00:15:36.478414 3162 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:15:36.480299 kubelet[3162]: E0707 00:15:36.480285 3162 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:15:36.481294 kubelet[3162]: I0707 00:15:36.481281 3162 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:15:36.481409 kubelet[3162]: I0707 00:15:36.481392 3162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:15:36.481476 kubelet[3162]: I0707 00:15:36.481464 3162 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:15:36.483284 kubelet[3162]: I0707 00:15:36.483265 3162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:15:36.483346 kubelet[3162]: I0707 00:15:36.483291 3162 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:15:36.483346 kubelet[3162]: I0707 00:15:36.483304 3162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:15:36.483346 kubelet[3162]: I0707 00:15:36.483310 3162 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:15:36.483407 kubelet[3162]: E0707 00:15:36.483338 3162 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:15:36.486229 kubelet[3162]: I0707 00:15:36.486139 3162 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:15:36.514554 kubelet[3162]: I0707 00:15:36.514543 3162 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:15:36.514622 kubelet[3162]: I0707 00:15:36.514617 3162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:15:36.514651 kubelet[3162]: I0707 00:15:36.514648 3162 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:15:36.515106 kubelet[3162]: I0707 00:15:36.515092 3162 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:15:36.515190 kubelet[3162]: I0707 00:15:36.515169 3162 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:15:36.515557 kubelet[3162]: I0707 00:15:36.515228 3162 policy_none.go:49] "None policy: Start" Jul 7 00:15:36.515557 kubelet[3162]: I0707 00:15:36.515239 3162 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:15:36.515557 kubelet[3162]: I0707 00:15:36.515250 3162 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:15:36.515557 kubelet[3162]: I0707 00:15:36.515355 3162 state_mem.go:75] "Updated machine memory state" Jul 7 00:15:36.518185 kubelet[3162]: I0707 00:15:36.518175 3162 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:15:36.518377 kubelet[3162]: I0707 00:15:36.518366 3162 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:15:36.518412 kubelet[3162]: I0707 00:15:36.518378 3162 container_log_manager.go:189] "Initializing container log 
rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:15:36.518680 kubelet[3162]: I0707 00:15:36.518672 3162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:15:36.520122 kubelet[3162]: E0707 00:15:36.520111 3162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:15:36.585247 kubelet[3162]: I0707 00:15:36.584714 3162 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.585247 kubelet[3162]: I0707 00:15:36.584731 3162 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.585247 kubelet[3162]: I0707 00:15:36.584801 3162 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.589923 sudo[3194]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:15:36.590134 sudo[3194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:15:36.593334 kubelet[3162]: W0707 00:15:36.593318 3162 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:36.597081 kubelet[3162]: W0707 00:15:36.597063 3162 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:36.597730 kubelet[3162]: W0707 00:15:36.597624 3162 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:36.597730 kubelet[3162]: E0707 00:15:36.597681 3162 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4344.1.1-a-19494a3847\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.623783 kubelet[3162]: I0707 00:15:36.623769 3162 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.636636 kubelet[3162]: I0707 00:15:36.636620 3162 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.636697 kubelet[3162]: I0707 00:15:36.636675 3162 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680596 kubelet[3162]: I0707 00:15:36.680313 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03272c166770829c63910e769ba0109c-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-19494a3847\" (UID: \"03272c166770829c63910e769ba0109c\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680596 kubelet[3162]: I0707 00:15:36.680341 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680596 kubelet[3162]: I0707 00:15:36.680389 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680596 kubelet[3162]: I0707 00:15:36.680407 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680596 kubelet[3162]: I0707 00:15:36.680423 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680733 kubelet[3162]: I0707 00:15:36.680459 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e79443bca164fe893b0f33a9ecb97f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-19494a3847\" (UID: \"8e79443bca164fe893b0f33a9ecb97f6\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680733 kubelet[3162]: I0707 00:15:36.680471 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:36.680733 kubelet[3162]: I0707 00:15:36.680482 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 
00:15:36.680733 kubelet[3162]: I0707 00:15:36.680495 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8908cc22fc2d3bbd92442a35d163717c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-19494a3847\" (UID: \"8908cc22fc2d3bbd92442a35d163717c\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" Jul 7 00:15:37.119291 sudo[3194]: pam_unix(sudo:session): session closed for user root Jul 7 00:15:37.458532 kubelet[3162]: I0707 00:15:37.458392 3162 apiserver.go:52] "Watching apiserver" Jul 7 00:15:37.478917 kubelet[3162]: I0707 00:15:37.478898 3162 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:15:37.504049 kubelet[3162]: I0707 00:15:37.504034 3162 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:37.504445 kubelet[3162]: I0707 00:15:37.504422 3162 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:37.518843 kubelet[3162]: W0707 00:15:37.518820 3162 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:37.519032 kubelet[3162]: E0707 00:15:37.519018 3162 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-19494a3847\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" Jul 7 00:15:37.519976 kubelet[3162]: W0707 00:15:37.519959 3162 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:15:37.520041 kubelet[3162]: E0707 00:15:37.520032 3162 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4344.1.1-a-19494a3847\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" Jul 7 00:15:37.544774 kubelet[3162]: I0707 00:15:37.544735 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-19494a3847" podStartSLOduration=3.544722066 podStartE2EDuration="3.544722066s" podCreationTimestamp="2025-07-07 00:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:37.535298685 +0000 UTC m=+1.130685497" watchObservedRunningTime="2025-07-07 00:15:37.544722066 +0000 UTC m=+1.140108830" Jul 7 00:15:37.554382 kubelet[3162]: I0707 00:15:37.554344 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-19494a3847" podStartSLOduration=1.554331184 podStartE2EDuration="1.554331184s" podCreationTimestamp="2025-07-07 00:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:37.545192354 +0000 UTC m=+1.140579103" watchObservedRunningTime="2025-07-07 00:15:37.554331184 +0000 UTC m=+1.149717934" Jul 7 00:15:38.560418 sudo[2190]: pam_unix(sudo:session): session closed for user root Jul 7 00:15:38.661803 sshd[2189]: Connection closed by 10.200.16.10 port 48672 Jul 7 00:15:38.662431 sshd-session[2187]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:38.665302 systemd[1]: sshd@6-10.200.4.38:22-10.200.16.10:48672.service: Deactivated successfully. Jul 7 00:15:38.666888 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:15:38.667039 systemd[1]: session-9.scope: Consumed 3.367s CPU time, 268.4M memory peak. Jul 7 00:15:38.668619 systemd-logind[1712]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:15:38.669828 systemd-logind[1712]: Removed session 9. 
Jul 7 00:15:40.010943 kubelet[3162]: I0707 00:15:40.010909 3162 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:15:40.011220 containerd[1801]: time="2025-07-07T00:15:40.011166402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:15:40.011388 kubelet[3162]: I0707 00:15:40.011285 3162 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:15:40.688797 kubelet[3162]: I0707 00:15:40.688667 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-19494a3847" podStartSLOduration=4.688650711 podStartE2EDuration="4.688650711s" podCreationTimestamp="2025-07-07 00:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:37.554782542 +0000 UTC m=+1.150169291" watchObservedRunningTime="2025-07-07 00:15:40.688650711 +0000 UTC m=+4.284037449" Jul 7 00:15:40.699553 systemd[1]: Created slice kubepods-besteffort-pod675668fc_0a61_482f_97ce_f12256bdbd48.slice - libcontainer container kubepods-besteffort-pod675668fc_0a61_482f_97ce_f12256bdbd48.slice. 
Jul 7 00:15:40.705923 kubelet[3162]: I0707 00:15:40.705604 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/675668fc-0a61-482f-97ce-f12256bdbd48-xtables-lock\") pod \"kube-proxy-rzvlp\" (UID: \"675668fc-0a61-482f-97ce-f12256bdbd48\") " pod="kube-system/kube-proxy-rzvlp" Jul 7 00:15:40.706003 kubelet[3162]: I0707 00:15:40.705951 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6458n\" (UniqueName: \"kubernetes.io/projected/675668fc-0a61-482f-97ce-f12256bdbd48-kube-api-access-6458n\") pod \"kube-proxy-rzvlp\" (UID: \"675668fc-0a61-482f-97ce-f12256bdbd48\") " pod="kube-system/kube-proxy-rzvlp" Jul 7 00:15:40.706182 kubelet[3162]: I0707 00:15:40.706133 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/675668fc-0a61-482f-97ce-f12256bdbd48-kube-proxy\") pod \"kube-proxy-rzvlp\" (UID: \"675668fc-0a61-482f-97ce-f12256bdbd48\") " pod="kube-system/kube-proxy-rzvlp" Jul 7 00:15:40.706182 kubelet[3162]: I0707 00:15:40.706156 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/675668fc-0a61-482f-97ce-f12256bdbd48-lib-modules\") pod \"kube-proxy-rzvlp\" (UID: \"675668fc-0a61-482f-97ce-f12256bdbd48\") " pod="kube-system/kube-proxy-rzvlp" Jul 7 00:15:40.715204 systemd[1]: Created slice kubepods-burstable-poda8e9ddbb_b1ea_4bd5_a67f_043ca03fa2ad.slice - libcontainer container kubepods-burstable-poda8e9ddbb_b1ea_4bd5_a67f_043ca03fa2ad.slice. 
Jul 7 00:15:40.806595 kubelet[3162]: I0707 00:15:40.806576 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-config-path\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.806595 kubelet[3162]: I0707 00:15:40.806599 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-etc-cni-netd\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806614 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-clustermesh-secrets\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806639 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-lib-modules\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806653 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-xtables-lock\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806670 3162 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hostproc\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806683 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-net\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807184 kubelet[3162]: I0707 00:15:40.806697 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hubble-tls\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806710 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58wnq\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-kube-api-access-58wnq\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806736 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-bpf-maps\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806752 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-kernel\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806766 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cni-path\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806791 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-cgroup\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.807312 kubelet[3162]: I0707 00:15:40.806818 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-run\") pod \"cilium-nc6fg\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " pod="kube-system/cilium-nc6fg" Jul 7 00:15:40.809665 kubelet[3162]: E0707 00:15:40.809641 3162 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 00:15:40.809665 kubelet[3162]: E0707 00:15:40.809662 3162 projected.go:194] Error preparing data for projected volume kube-api-access-6458n for pod kube-system/kube-proxy-rzvlp: configmap "kube-root-ca.crt" not found Jul 7 00:15:40.809761 kubelet[3162]: E0707 00:15:40.809711 3162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/675668fc-0a61-482f-97ce-f12256bdbd48-kube-api-access-6458n podName:675668fc-0a61-482f-97ce-f12256bdbd48 nodeName:}" failed. 
No retries permitted until 2025-07-07 00:15:41.309693342 +0000 UTC m=+4.905080084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6458n" (UniqueName: "kubernetes.io/projected/675668fc-0a61-482f-97ce-f12256bdbd48-kube-api-access-6458n") pod "kube-proxy-rzvlp" (UID: "675668fc-0a61-482f-97ce-f12256bdbd48") : configmap "kube-root-ca.crt" not found Jul 7 00:15:41.019376 containerd[1801]: time="2025-07-07T00:15:41.019295202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc6fg,Uid:a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:41.088395 systemd[1]: Created slice kubepods-besteffort-pod19fd64a8_a256_4e83_b986_dd60e5e84afd.slice - libcontainer container kubepods-besteffort-pod19fd64a8_a256_4e83_b986_dd60e5e84afd.slice. Jul 7 00:15:41.108697 kubelet[3162]: I0707 00:15:41.108619 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbls\" (UniqueName: \"kubernetes.io/projected/19fd64a8-a256-4e83-b986-dd60e5e84afd-kube-api-access-mkbls\") pod \"cilium-operator-6c4d7847fc-5rzjz\" (UID: \"19fd64a8-a256-4e83-b986-dd60e5e84afd\") " pod="kube-system/cilium-operator-6c4d7847fc-5rzjz" Jul 7 00:15:41.108697 kubelet[3162]: I0707 00:15:41.108661 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fd64a8-a256-4e83-b986-dd60e5e84afd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5rzjz\" (UID: \"19fd64a8-a256-4e83-b986-dd60e5e84afd\") " pod="kube-system/cilium-operator-6c4d7847fc-5rzjz" Jul 7 00:15:41.122053 containerd[1801]: time="2025-07-07T00:15:41.122021337Z" level=info msg="connecting to shim 6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 
00:15:41.149640 systemd[1]: Started cri-containerd-6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42.scope - libcontainer container 6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42. Jul 7 00:15:41.167947 containerd[1801]: time="2025-07-07T00:15:41.167926260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc6fg,Uid:a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\"" Jul 7 00:15:41.169251 containerd[1801]: time="2025-07-07T00:15:41.169193656Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:15:41.393778 containerd[1801]: time="2025-07-07T00:15:41.393719949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5rzjz,Uid:19fd64a8-a256-4e83-b986-dd60e5e84afd,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:41.429077 containerd[1801]: time="2025-07-07T00:15:41.429047280Z" level=info msg="connecting to shim b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f" address="unix:///run/containerd/s/dfecb19b0c7d444f4f7e843c0886bfcf21111816c63f24071283bd059b8f2f8b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:41.446647 systemd[1]: Started cri-containerd-b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f.scope - libcontainer container b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f. 
Jul 7 00:15:41.481112 containerd[1801]: time="2025-07-07T00:15:41.481094223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5rzjz,Uid:19fd64a8-a256-4e83-b986-dd60e5e84afd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\"" Jul 7 00:15:41.609918 containerd[1801]: time="2025-07-07T00:15:41.609882734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzvlp,Uid:675668fc-0a61-482f-97ce-f12256bdbd48,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:41.657945 containerd[1801]: time="2025-07-07T00:15:41.657911264Z" level=info msg="connecting to shim 00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d" address="unix:///run/containerd/s/03637b74cc1e20999813dc91cfabcb84a87c43423db6c9b9463895aaa57b1f6b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:41.677652 systemd[1]: Started cri-containerd-00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d.scope - libcontainer container 00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d. 
Jul 7 00:15:41.697088 containerd[1801]: time="2025-07-07T00:15:41.697056228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzvlp,Uid:675668fc-0a61-482f-97ce-f12256bdbd48,Namespace:kube-system,Attempt:0,} returns sandbox id \"00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d\"" Jul 7 00:15:41.699205 containerd[1801]: time="2025-07-07T00:15:41.699105516Z" level=info msg="CreateContainer within sandbox \"00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:15:41.718819 containerd[1801]: time="2025-07-07T00:15:41.718799910Z" level=info msg="Container 3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:41.735871 containerd[1801]: time="2025-07-07T00:15:41.735848698Z" level=info msg="CreateContainer within sandbox \"00c80e83700ec373bac7c363e67e1e49a1835c4173bcc87708128e36d90aad9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409\"" Jul 7 00:15:41.736873 containerd[1801]: time="2025-07-07T00:15:41.736204176Z" level=info msg="StartContainer for \"3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409\"" Jul 7 00:15:41.737442 containerd[1801]: time="2025-07-07T00:15:41.737403632Z" level=info msg="connecting to shim 3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409" address="unix:///run/containerd/s/03637b74cc1e20999813dc91cfabcb84a87c43423db6c9b9463895aaa57b1f6b" protocol=ttrpc version=3 Jul 7 00:15:41.752643 systemd[1]: Started cri-containerd-3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409.scope - libcontainer container 3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409. 
Jul 7 00:15:41.779299 containerd[1801]: time="2025-07-07T00:15:41.779277905Z" level=info msg="StartContainer for \"3c631842a4f63bfa996affdb8b7f0758603322727753666a087f1065faef2409\" returns successfully" Jul 7 00:15:42.530213 kubelet[3162]: I0707 00:15:42.530081 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rzvlp" podStartSLOduration=2.5300641280000002 podStartE2EDuration="2.530064128s" podCreationTimestamp="2025-07-07 00:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:42.529637624 +0000 UTC m=+6.125024370" watchObservedRunningTime="2025-07-07 00:15:42.530064128 +0000 UTC m=+6.125450872" Jul 7 00:15:45.127325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660283643.mount: Deactivated successfully. Jul 7 00:15:46.427630 containerd[1801]: time="2025-07-07T00:15:46.427591817Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:46.430654 containerd[1801]: time="2025-07-07T00:15:46.430624941Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:15:46.433886 containerd[1801]: time="2025-07-07T00:15:46.433846191Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:46.434883 containerd[1801]: time="2025-07-07T00:15:46.434696827Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.265475884s" Jul 7 00:15:46.434883 containerd[1801]: time="2025-07-07T00:15:46.434725212Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:15:46.435646 containerd[1801]: time="2025-07-07T00:15:46.435625809Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:15:46.437083 containerd[1801]: time="2025-07-07T00:15:46.436701687Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:15:46.457154 containerd[1801]: time="2025-07-07T00:15:46.456619905Z" level=info msg="Container 2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:46.478998 containerd[1801]: time="2025-07-07T00:15:46.478975927Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\"" Jul 7 00:15:46.479303 containerd[1801]: time="2025-07-07T00:15:46.479281423Z" level=info msg="StartContainer for \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\"" Jul 7 00:15:46.479992 containerd[1801]: time="2025-07-07T00:15:46.479969507Z" level=info msg="connecting to shim 2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" protocol=ttrpc version=3 Jul 7 00:15:46.501671 systemd[1]: 
Started cri-containerd-2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388.scope - libcontainer container 2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388. Jul 7 00:15:46.530102 systemd[1]: cri-containerd-2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388.scope: Deactivated successfully. Jul 7 00:15:46.532516 containerd[1801]: time="2025-07-07T00:15:46.532496539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" id:\"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" pid:3575 exited_at:{seconds:1751847346 nanos:532005828}" Jul 7 00:15:46.533384 containerd[1801]: time="2025-07-07T00:15:46.533356587Z" level=info msg="received exit event container_id:\"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" id:\"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" pid:3575 exited_at:{seconds:1751847346 nanos:532005828}" Jul 7 00:15:46.534807 containerd[1801]: time="2025-07-07T00:15:46.534779369Z" level=info msg="StartContainer for \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" returns successfully" Jul 7 00:15:46.549878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388-rootfs.mount: Deactivated successfully. 
Jul 7 00:15:50.530873 containerd[1801]: time="2025-07-07T00:15:50.530810098Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:15:50.561549 containerd[1801]: time="2025-07-07T00:15:50.556368693Z" level=info msg="Container 6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:50.576144 containerd[1801]: time="2025-07-07T00:15:50.576114042Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\"" Jul 7 00:15:50.576900 containerd[1801]: time="2025-07-07T00:15:50.576875007Z" level=info msg="StartContainer for \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\"" Jul 7 00:15:50.577495 containerd[1801]: time="2025-07-07T00:15:50.577467576Z" level=info msg="connecting to shim 6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" protocol=ttrpc version=3 Jul 7 00:15:50.598288 systemd[1]: Started cri-containerd-6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e.scope - libcontainer container 6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e. Jul 7 00:15:50.634333 containerd[1801]: time="2025-07-07T00:15:50.634309050Z" level=info msg="StartContainer for \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" returns successfully" Jul 7 00:15:50.644743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:15:50.644954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 00:15:50.645414 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:15:50.646784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:15:50.650065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:15:50.654212 systemd[1]: cri-containerd-6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e.scope: Deactivated successfully. Jul 7 00:15:50.655564 containerd[1801]: time="2025-07-07T00:15:50.654234852Z" level=info msg="received exit event container_id:\"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" id:\"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" pid:3623 exited_at:{seconds:1751847350 nanos:653949650}" Jul 7 00:15:50.655564 containerd[1801]: time="2025-07-07T00:15:50.655078761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" id:\"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" pid:3623 exited_at:{seconds:1751847350 nanos:653949650}" Jul 7 00:15:50.679449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 00:15:51.124103 containerd[1801]: time="2025-07-07T00:15:51.124077236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:51.127265 containerd[1801]: time="2025-07-07T00:15:51.127237000Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:15:51.130244 containerd[1801]: time="2025-07-07T00:15:51.130211885Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:51.131094 containerd[1801]: time="2025-07-07T00:15:51.130994194Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.695258777s" Jul 7 00:15:51.131094 containerd[1801]: time="2025-07-07T00:15:51.131019453Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:15:51.132762 containerd[1801]: time="2025-07-07T00:15:51.132742001Z" level=info msg="CreateContainer within sandbox \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:15:51.149299 containerd[1801]: time="2025-07-07T00:15:51.149278132Z" level=info msg="Container 
000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:51.160722 containerd[1801]: time="2025-07-07T00:15:51.160700102Z" level=info msg="CreateContainer within sandbox \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\"" Jul 7 00:15:51.161019 containerd[1801]: time="2025-07-07T00:15:51.160973809Z" level=info msg="StartContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\"" Jul 7 00:15:51.161848 containerd[1801]: time="2025-07-07T00:15:51.161784441Z" level=info msg="connecting to shim 000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f" address="unix:///run/containerd/s/dfecb19b0c7d444f4f7e843c0886bfcf21111816c63f24071283bd059b8f2f8b" protocol=ttrpc version=3 Jul 7 00:15:51.176654 systemd[1]: Started cri-containerd-000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f.scope - libcontainer container 000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f. Jul 7 00:15:51.199663 containerd[1801]: time="2025-07-07T00:15:51.199640134Z" level=info msg="StartContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" returns successfully" Jul 7 00:15:51.540567 containerd[1801]: time="2025-07-07T00:15:51.540540420Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:15:51.560200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e-rootfs.mount: Deactivated successfully. Jul 7 00:15:51.571681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837212510.mount: Deactivated successfully. 
Jul 7 00:15:51.574402 containerd[1801]: time="2025-07-07T00:15:51.574147382Z" level=info msg="Container 9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:51.586830 kubelet[3162]: I0707 00:15:51.586780 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5rzjz" podStartSLOduration=0.937013647 podStartE2EDuration="10.586763035s" podCreationTimestamp="2025-07-07 00:15:41 +0000 UTC" firstStartedPulling="2025-07-07 00:15:41.481902567 +0000 UTC m=+5.077289305" lastFinishedPulling="2025-07-07 00:15:51.131651958 +0000 UTC m=+14.727038693" observedRunningTime="2025-07-07 00:15:51.554337097 +0000 UTC m=+15.149723842" watchObservedRunningTime="2025-07-07 00:15:51.586763035 +0000 UTC m=+15.182149784" Jul 7 00:15:51.594094 containerd[1801]: time="2025-07-07T00:15:51.594063690Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\"" Jul 7 00:15:51.594488 containerd[1801]: time="2025-07-07T00:15:51.594466250Z" level=info msg="StartContainer for \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\"" Jul 7 00:15:51.595683 containerd[1801]: time="2025-07-07T00:15:51.595657224Z" level=info msg="connecting to shim 9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" protocol=ttrpc version=3 Jul 7 00:15:51.617638 systemd[1]: Started cri-containerd-9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222.scope - libcontainer container 9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222. 
Jul 7 00:15:51.641644 systemd[1]: cri-containerd-9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222.scope: Deactivated successfully. Jul 7 00:15:51.645002 containerd[1801]: time="2025-07-07T00:15:51.644938638Z" level=info msg="received exit event container_id:\"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" id:\"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" pid:3716 exited_at:{seconds:1751847351 nanos:644658871}" Jul 7 00:15:51.645066 containerd[1801]: time="2025-07-07T00:15:51.645042040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" id:\"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" pid:3716 exited_at:{seconds:1751847351 nanos:644658871}" Jul 7 00:15:51.646553 containerd[1801]: time="2025-07-07T00:15:51.645833814Z" level=info msg="StartContainer for \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" returns successfully" Jul 7 00:15:51.665336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222-rootfs.mount: Deactivated successfully. 
Jul 7 00:15:52.541549 containerd[1801]: time="2025-07-07T00:15:52.541173822Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:15:52.560447 containerd[1801]: time="2025-07-07T00:15:52.560420963Z" level=info msg="Container 930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:52.574279 containerd[1801]: time="2025-07-07T00:15:52.574251492Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\"" Jul 7 00:15:52.574577 containerd[1801]: time="2025-07-07T00:15:52.574553562Z" level=info msg="StartContainer for \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\"" Jul 7 00:15:52.575511 containerd[1801]: time="2025-07-07T00:15:52.575283518Z" level=info msg="connecting to shim 930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" protocol=ttrpc version=3 Jul 7 00:15:52.599669 systemd[1]: Started cri-containerd-930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b.scope - libcontainer container 930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b. Jul 7 00:15:52.616335 systemd[1]: cri-containerd-930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b.scope: Deactivated successfully. 
Jul 7 00:15:52.618011 containerd[1801]: time="2025-07-07T00:15:52.617974063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" id:\"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" pid:3756 exited_at:{seconds:1751847352 nanos:617786683}" Jul 7 00:15:52.622683 containerd[1801]: time="2025-07-07T00:15:52.622579884Z" level=info msg="received exit event container_id:\"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" id:\"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" pid:3756 exited_at:{seconds:1751847352 nanos:617786683}" Jul 7 00:15:52.627214 containerd[1801]: time="2025-07-07T00:15:52.627191503Z" level=info msg="StartContainer for \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" returns successfully" Jul 7 00:15:52.636038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b-rootfs.mount: Deactivated successfully. 
Jul 7 00:15:53.544183 containerd[1801]: time="2025-07-07T00:15:53.544154200Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:15:53.564894 containerd[1801]: time="2025-07-07T00:15:53.564870496Z" level=info msg="Container 6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:53.578180 containerd[1801]: time="2025-07-07T00:15:53.578159909Z" level=info msg="CreateContainer within sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\"" Jul 7 00:15:53.579186 containerd[1801]: time="2025-07-07T00:15:53.578499481Z" level=info msg="StartContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\"" Jul 7 00:15:53.579374 containerd[1801]: time="2025-07-07T00:15:53.579327797Z" level=info msg="connecting to shim 6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177" address="unix:///run/containerd/s/b04a6e9d800ed4e61da2e56c5b82c3a3a104323a72c5360cb84efada2d36ba4d" protocol=ttrpc version=3 Jul 7 00:15:53.600653 systemd[1]: Started cri-containerd-6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177.scope - libcontainer container 6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177. 
Jul 7 00:15:53.629515 containerd[1801]: time="2025-07-07T00:15:53.629482991Z" level=info msg="StartContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" returns successfully" Jul 7 00:15:53.688779 containerd[1801]: time="2025-07-07T00:15:53.688753582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" id:\"25c59f7680263ab946fddf8379c95314304808648e0af5156a136411782e16a2\" pid:3826 exited_at:{seconds:1751847353 nanos:688424524}" Jul 7 00:15:53.714924 kubelet[3162]: I0707 00:15:53.714904 3162 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:15:53.754207 systemd[1]: Created slice kubepods-burstable-poda21170e4_354b_4cbc_bbca_e48fefd257ee.slice - libcontainer container kubepods-burstable-poda21170e4_354b_4cbc_bbca_e48fefd257ee.slice. Jul 7 00:15:53.763893 systemd[1]: Created slice kubepods-burstable-podab88c887_6469_4e6e_8419_c0cdd80fd765.slice - libcontainer container kubepods-burstable-podab88c887_6469_4e6e_8419_c0cdd80fd765.slice. 
Jul 7 00:15:53.791768 kubelet[3162]: I0707 00:15:53.791741 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab88c887-6469-4e6e-8419-c0cdd80fd765-config-volume\") pod \"coredns-668d6bf9bc-k8htz\" (UID: \"ab88c887-6469-4e6e-8419-c0cdd80fd765\") " pod="kube-system/coredns-668d6bf9bc-k8htz" Jul 7 00:15:53.791844 kubelet[3162]: I0707 00:15:53.791777 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxfd\" (UniqueName: \"kubernetes.io/projected/ab88c887-6469-4e6e-8419-c0cdd80fd765-kube-api-access-9jxfd\") pod \"coredns-668d6bf9bc-k8htz\" (UID: \"ab88c887-6469-4e6e-8419-c0cdd80fd765\") " pod="kube-system/coredns-668d6bf9bc-k8htz" Jul 7 00:15:53.791844 kubelet[3162]: I0707 00:15:53.791796 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvgq\" (UniqueName: \"kubernetes.io/projected/a21170e4-354b-4cbc-bbca-e48fefd257ee-kube-api-access-szvgq\") pod \"coredns-668d6bf9bc-htjqc\" (UID: \"a21170e4-354b-4cbc-bbca-e48fefd257ee\") " pod="kube-system/coredns-668d6bf9bc-htjqc" Jul 7 00:15:53.791844 kubelet[3162]: I0707 00:15:53.791813 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a21170e4-354b-4cbc-bbca-e48fefd257ee-config-volume\") pod \"coredns-668d6bf9bc-htjqc\" (UID: \"a21170e4-354b-4cbc-bbca-e48fefd257ee\") " pod="kube-system/coredns-668d6bf9bc-htjqc" Jul 7 00:15:54.060213 containerd[1801]: time="2025-07-07T00:15:54.060188993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htjqc,Uid:a21170e4-354b-4cbc-bbca-e48fefd257ee,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:54.066674 containerd[1801]: time="2025-07-07T00:15:54.066652020Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-k8htz,Uid:ab88c887-6469-4e6e-8419-c0cdd80fd765,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:54.560769 kubelet[3162]: I0707 00:15:54.560426 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nc6fg" podStartSLOduration=9.293688234 podStartE2EDuration="14.560411165s" podCreationTimestamp="2025-07-07 00:15:40 +0000 UTC" firstStartedPulling="2025-07-07 00:15:41.168809017 +0000 UTC m=+4.764195762" lastFinishedPulling="2025-07-07 00:15:46.43553195 +0000 UTC m=+10.030918693" observedRunningTime="2025-07-07 00:15:54.559970741 +0000 UTC m=+18.155357492" watchObservedRunningTime="2025-07-07 00:15:54.560411165 +0000 UTC m=+18.155797971" Jul 7 00:15:55.623880 systemd-networkd[1369]: cilium_host: Link UP Jul 7 00:15:55.624368 systemd-networkd[1369]: cilium_net: Link UP Jul 7 00:15:55.625550 systemd-networkd[1369]: cilium_net: Gained carrier Jul 7 00:15:55.625724 systemd-networkd[1369]: cilium_host: Gained carrier Jul 7 00:15:55.780964 systemd-networkd[1369]: cilium_vxlan: Link UP Jul 7 00:15:55.780968 systemd-networkd[1369]: cilium_vxlan: Gained carrier Jul 7 00:15:55.911608 systemd-networkd[1369]: cilium_net: Gained IPv6LL Jul 7 00:15:55.951552 kernel: NET: Registered PF_ALG protocol family Jul 7 00:15:56.405843 systemd-networkd[1369]: lxc_health: Link UP Jul 7 00:15:56.410718 systemd-networkd[1369]: cilium_host: Gained IPv6LL Jul 7 00:15:56.410897 systemd-networkd[1369]: lxc_health: Gained carrier Jul 7 00:15:56.583845 systemd-networkd[1369]: lxc340d6842c67e: Link UP Jul 7 00:15:56.591714 kernel: eth0: renamed from tmp1690f Jul 7 00:15:56.595566 systemd-networkd[1369]: lxc340d6842c67e: Gained carrier Jul 7 00:15:56.602260 systemd-networkd[1369]: lxc251ec5083adb: Link UP Jul 7 00:15:56.613777 kernel: eth0: renamed from tmp7d0ce Jul 7 00:15:56.614942 systemd-networkd[1369]: lxc251ec5083adb: Gained carrier Jul 7 00:15:57.367605 systemd-networkd[1369]: cilium_vxlan: Gained IPv6LL Jul 7 
00:15:57.751661 systemd-networkd[1369]: lxc251ec5083adb: Gained IPv6LL Jul 7 00:15:57.815607 systemd-networkd[1369]: lxc_health: Gained IPv6LL Jul 7 00:15:57.816211 systemd-networkd[1369]: lxc340d6842c67e: Gained IPv6LL Jul 7 00:15:59.218123 containerd[1801]: time="2025-07-07T00:15:59.216495375Z" level=info msg="connecting to shim 7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac" address="unix:///run/containerd/s/1b9d59b3fd6058b968fea62638f065f6a35c8bd1096a141f031aa8294a33fc99" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:59.232888 containerd[1801]: time="2025-07-07T00:15:59.232832347Z" level=info msg="connecting to shim 1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c" address="unix:///run/containerd/s/cd261730ba4dbf6d0ceccffda85dc21780e9fd76bdc63b03953edba56d20213c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:59.253645 systemd[1]: Started cri-containerd-7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac.scope - libcontainer container 7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac. Jul 7 00:15:59.256463 systemd[1]: Started cri-containerd-1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c.scope - libcontainer container 1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c. 
Jul 7 00:15:59.304543 containerd[1801]: time="2025-07-07T00:15:59.304508797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8htz,Uid:ab88c887-6469-4e6e-8419-c0cdd80fd765,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac\"" Jul 7 00:15:59.306348 containerd[1801]: time="2025-07-07T00:15:59.306322604Z" level=info msg="CreateContainer within sandbox \"7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:15:59.307941 containerd[1801]: time="2025-07-07T00:15:59.307921978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htjqc,Uid:a21170e4-354b-4cbc-bbca-e48fefd257ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c\"" Jul 7 00:15:59.309994 containerd[1801]: time="2025-07-07T00:15:59.309924479Z" level=info msg="CreateContainer within sandbox \"1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:15:59.329767 containerd[1801]: time="2025-07-07T00:15:59.329744176Z" level=info msg="Container 05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:59.332709 containerd[1801]: time="2025-07-07T00:15:59.332687774Z" level=info msg="Container 94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:59.348979 containerd[1801]: time="2025-07-07T00:15:59.348957616Z" level=info msg="CreateContainer within sandbox \"7d0ce44b00ff6b24b618d82f526562b78f29e9432209fc86bbbfeece8acd29ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905\"" Jul 7 00:15:59.349261 containerd[1801]: time="2025-07-07T00:15:59.349246022Z" 
level=info msg="StartContainer for \"05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905\"" Jul 7 00:15:59.350076 containerd[1801]: time="2025-07-07T00:15:59.350039778Z" level=info msg="connecting to shim 05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905" address="unix:///run/containerd/s/1b9d59b3fd6058b968fea62638f065f6a35c8bd1096a141f031aa8294a33fc99" protocol=ttrpc version=3 Jul 7 00:15:59.351169 containerd[1801]: time="2025-07-07T00:15:59.351103878Z" level=info msg="CreateContainer within sandbox \"1690ffd9ca8022ab0e8aee6a65ae1953d9418cba3ff1e140f773980c55abb73c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b\"" Jul 7 00:15:59.351466 containerd[1801]: time="2025-07-07T00:15:59.351444954Z" level=info msg="StartContainer for \"94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b\"" Jul 7 00:15:59.352594 containerd[1801]: time="2025-07-07T00:15:59.352158930Z" level=info msg="connecting to shim 94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b" address="unix:///run/containerd/s/cd261730ba4dbf6d0ceccffda85dc21780e9fd76bdc63b03953edba56d20213c" protocol=ttrpc version=3 Jul 7 00:15:59.363663 systemd[1]: Started cri-containerd-05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905.scope - libcontainer container 05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905. Jul 7 00:15:59.367217 systemd[1]: Started cri-containerd-94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b.scope - libcontainer container 94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b. 
Jul 7 00:15:59.399846 containerd[1801]: time="2025-07-07T00:15:59.399465650Z" level=info msg="StartContainer for \"05ff237eef8ab1067b7619b8062c1515b98df69b82bafaff673d3ec86d9f6905\" returns successfully" Jul 7 00:15:59.399964 containerd[1801]: time="2025-07-07T00:15:59.399929695Z" level=info msg="StartContainer for \"94018d01124438a1487d1c974b66a1c9d42e63621f3a147d6d32c4389243a86b\" returns successfully" Jul 7 00:15:59.575879 kubelet[3162]: I0707 00:15:59.575790 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k8htz" podStartSLOduration=18.575773406 podStartE2EDuration="18.575773406s" podCreationTimestamp="2025-07-07 00:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:59.574466889 +0000 UTC m=+23.169853630" watchObservedRunningTime="2025-07-07 00:15:59.575773406 +0000 UTC m=+23.171160152" Jul 7 00:15:59.589061 kubelet[3162]: I0707 00:15:59.588519 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-htjqc" podStartSLOduration=18.588503973 podStartE2EDuration="18.588503973s" podCreationTimestamp="2025-07-07 00:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:59.588349424 +0000 UTC m=+23.183736231" watchObservedRunningTime="2025-07-07 00:15:59.588503973 +0000 UTC m=+23.183890723" Jul 7 00:16:07.959936 kubelet[3162]: I0707 00:16:07.959907 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:17:19.142172 systemd[1]: Started sshd@7-10.200.4.38:22-10.200.16.10:53272.service - OpenSSH per-connection server daemon (10.200.16.10:53272). 
Jul 7 00:17:19.735003 sshd[4471]: Accepted publickey for core from 10.200.16.10 port 53272 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:19.736126 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:19.742823 systemd-logind[1712]: New session 10 of user core. Jul 7 00:17:19.746660 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:17:20.208593 sshd[4473]: Connection closed by 10.200.16.10 port 53272 Jul 7 00:17:20.208946 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:20.210857 systemd[1]: sshd@7-10.200.4.38:22-10.200.16.10:53272.service: Deactivated successfully. Jul 7 00:17:20.212363 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:17:20.213396 systemd-logind[1712]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:17:20.214350 systemd-logind[1712]: Removed session 10. Jul 7 00:17:25.321260 systemd[1]: Started sshd@8-10.200.4.38:22-10.200.16.10:41554.service - OpenSSH per-connection server daemon (10.200.16.10:41554). Jul 7 00:17:25.919113 sshd[4486]: Accepted publickey for core from 10.200.16.10 port 41554 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:25.920054 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:25.923873 systemd-logind[1712]: New session 11 of user core. Jul 7 00:17:25.928662 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:17:26.376396 sshd[4488]: Connection closed by 10.200.16.10 port 41554 Jul 7 00:17:26.376978 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:26.379337 systemd[1]: sshd@8-10.200.4.38:22-10.200.16.10:41554.service: Deactivated successfully. Jul 7 00:17:26.380728 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:17:26.381415 systemd-logind[1712]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 00:17:26.382414 systemd-logind[1712]: Removed session 11. Jul 7 00:17:31.494055 systemd[1]: Started sshd@9-10.200.4.38:22-10.200.16.10:48790.service - OpenSSH per-connection server daemon (10.200.16.10:48790). Jul 7 00:17:32.094432 sshd[4502]: Accepted publickey for core from 10.200.16.10 port 48790 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:32.095360 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:32.099116 systemd-logind[1712]: New session 12 of user core. Jul 7 00:17:32.102654 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:17:32.571590 sshd[4504]: Connection closed by 10.200.16.10 port 48790 Jul 7 00:17:32.573875 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:32.576129 systemd[1]: sshd@9-10.200.4.38:22-10.200.16.10:48790.service: Deactivated successfully. Jul 7 00:17:32.577688 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:17:32.578375 systemd-logind[1712]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:17:32.579298 systemd-logind[1712]: Removed session 12. Jul 7 00:17:37.676995 systemd[1]: Started sshd@10-10.200.4.38:22-10.200.16.10:48796.service - OpenSSH per-connection server daemon (10.200.16.10:48796). Jul 7 00:17:38.267277 sshd[4519]: Accepted publickey for core from 10.200.16.10 port 48796 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:38.268224 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:38.272378 systemd-logind[1712]: New session 13 of user core. Jul 7 00:17:38.275675 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 7 00:17:38.727892 sshd[4521]: Connection closed by 10.200.16.10 port 48796 Jul 7 00:17:38.727789 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:38.731102 systemd[1]: sshd@10-10.200.4.38:22-10.200.16.10:48796.service: Deactivated successfully. Jul 7 00:17:38.733993 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:17:38.735408 systemd-logind[1712]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:17:38.737140 systemd-logind[1712]: Removed session 13. Jul 7 00:17:38.835671 systemd[1]: Started sshd@11-10.200.4.38:22-10.200.16.10:48804.service - OpenSSH per-connection server daemon (10.200.16.10:48804). Jul 7 00:17:39.437722 sshd[4534]: Accepted publickey for core from 10.200.16.10 port 48804 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:39.438596 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:39.442136 systemd-logind[1712]: New session 14 of user core. Jul 7 00:17:39.446649 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:17:39.925820 sshd[4536]: Connection closed by 10.200.16.10 port 48804 Jul 7 00:17:39.926165 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:39.928436 systemd[1]: sshd@11-10.200.4.38:22-10.200.16.10:48804.service: Deactivated successfully. Jul 7 00:17:39.929759 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:17:39.930324 systemd-logind[1712]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:17:39.931328 systemd-logind[1712]: Removed session 14. Jul 7 00:17:40.034972 systemd[1]: Started sshd@12-10.200.4.38:22-10.200.16.10:49818.service - OpenSSH per-connection server daemon (10.200.16.10:49818). 
Jul 7 00:17:40.643581 sshd[4546]: Accepted publickey for core from 10.200.16.10 port 49818 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:40.644393 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:40.647810 systemd-logind[1712]: New session 15 of user core. Jul 7 00:17:40.651661 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:17:41.110698 sshd[4548]: Connection closed by 10.200.16.10 port 49818 Jul 7 00:17:41.111048 sshd-session[4546]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:41.113229 systemd[1]: sshd@12-10.200.4.38:22-10.200.16.10:49818.service: Deactivated successfully. Jul 7 00:17:41.114658 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:17:41.115220 systemd-logind[1712]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:17:41.116173 systemd-logind[1712]: Removed session 15. Jul 7 00:17:46.220249 systemd[1]: Started sshd@13-10.200.4.38:22-10.200.16.10:49824.service - OpenSSH per-connection server daemon (10.200.16.10:49824). Jul 7 00:17:46.822187 sshd[4562]: Accepted publickey for core from 10.200.16.10 port 49824 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:46.823110 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:46.826885 systemd-logind[1712]: New session 16 of user core. Jul 7 00:17:46.837637 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:17:47.283498 sshd[4564]: Connection closed by 10.200.16.10 port 49824 Jul 7 00:17:47.283856 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:47.286237 systemd[1]: sshd@13-10.200.4.38:22-10.200.16.10:49824.service: Deactivated successfully. Jul 7 00:17:47.287616 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:17:47.288216 systemd-logind[1712]: Session 16 logged out. Waiting for processes to exit. 
Jul 7 00:17:47.289296 systemd-logind[1712]: Removed session 16. Jul 7 00:17:47.391847 systemd[1]: Started sshd@14-10.200.4.38:22-10.200.16.10:49834.service - OpenSSH per-connection server daemon (10.200.16.10:49834). Jul 7 00:17:47.988643 sshd[4576]: Accepted publickey for core from 10.200.16.10 port 49834 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:47.989547 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:47.992556 systemd-logind[1712]: New session 17 of user core. Jul 7 00:17:47.997652 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:17:48.487055 sshd[4578]: Connection closed by 10.200.16.10 port 49834 Jul 7 00:17:48.487384 sshd-session[4576]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:48.489350 systemd[1]: sshd@14-10.200.4.38:22-10.200.16.10:49834.service: Deactivated successfully. Jul 7 00:17:48.491729 systemd-logind[1712]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:17:48.491746 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:17:48.492870 systemd-logind[1712]: Removed session 17. Jul 7 00:17:48.599678 systemd[1]: Started sshd@15-10.200.4.38:22-10.200.16.10:49836.service - OpenSSH per-connection server daemon (10.200.16.10:49836). Jul 7 00:17:49.195903 sshd[4588]: Accepted publickey for core from 10.200.16.10 port 49836 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:49.196747 sshd-session[4588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:49.199971 systemd-logind[1712]: New session 18 of user core. Jul 7 00:17:49.203639 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 00:17:50.142200 sshd[4590]: Connection closed by 10.200.16.10 port 49836 Jul 7 00:17:50.142622 sshd-session[4588]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:50.144589 systemd[1]: sshd@15-10.200.4.38:22-10.200.16.10:49836.service: Deactivated successfully. Jul 7 00:17:50.146054 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:17:50.147590 systemd-logind[1712]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:17:50.148309 systemd-logind[1712]: Removed session 18. Jul 7 00:17:50.243785 systemd[1]: Started sshd@16-10.200.4.38:22-10.200.16.10:47728.service - OpenSSH per-connection server daemon (10.200.16.10:47728). Jul 7 00:17:50.838772 sshd[4607]: Accepted publickey for core from 10.200.16.10 port 47728 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:50.839640 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:50.842586 systemd-logind[1712]: New session 19 of user core. Jul 7 00:17:50.852623 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:17:51.376526 sshd[4609]: Connection closed by 10.200.16.10 port 47728 Jul 7 00:17:51.376876 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:51.378774 systemd[1]: sshd@16-10.200.4.38:22-10.200.16.10:47728.service: Deactivated successfully. Jul 7 00:17:51.380087 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:17:51.381147 systemd-logind[1712]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:17:51.382202 systemd-logind[1712]: Removed session 19. Jul 7 00:17:51.479761 systemd[1]: Started sshd@17-10.200.4.38:22-10.200.16.10:47738.service - OpenSSH per-connection server daemon (10.200.16.10:47738). 
Jul 7 00:17:52.071374 sshd[4618]: Accepted publickey for core from 10.200.16.10 port 47738 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:52.072225 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:52.075776 systemd-logind[1712]: New session 20 of user core. Jul 7 00:17:52.080644 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:17:52.535838 sshd[4620]: Connection closed by 10.200.16.10 port 47738 Jul 7 00:17:52.536549 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:52.538637 systemd[1]: sshd@17-10.200.4.38:22-10.200.16.10:47738.service: Deactivated successfully. Jul 7 00:17:52.540002 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:17:52.541500 systemd-logind[1712]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:17:52.542221 systemd-logind[1712]: Removed session 20. Jul 7 00:17:57.657260 systemd[1]: Started sshd@18-10.200.4.38:22-10.200.16.10:47752.service - OpenSSH per-connection server daemon (10.200.16.10:47752). Jul 7 00:17:58.253034 sshd[4634]: Accepted publickey for core from 10.200.16.10 port 47752 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:17:58.254006 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:58.258008 systemd-logind[1712]: New session 21 of user core. Jul 7 00:17:58.262673 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:17:58.718478 sshd[4636]: Connection closed by 10.200.16.10 port 47752 Jul 7 00:17:58.718851 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:58.720768 systemd[1]: sshd@18-10.200.4.38:22-10.200.16.10:47752.service: Deactivated successfully. Jul 7 00:17:58.722106 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:17:58.723193 systemd-logind[1712]: Session 21 logged out. Waiting for processes to exit. 
Jul 7 00:17:58.724250 systemd-logind[1712]: Removed session 21. Jul 7 00:18:03.825030 systemd[1]: Started sshd@19-10.200.4.38:22-10.200.16.10:48792.service - OpenSSH per-connection server daemon (10.200.16.10:48792). Jul 7 00:18:04.422266 sshd[4648]: Accepted publickey for core from 10.200.16.10 port 48792 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:18:04.423159 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:04.426875 systemd-logind[1712]: New session 22 of user core. Jul 7 00:18:04.429639 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:18:04.898221 sshd[4650]: Connection closed by 10.200.16.10 port 48792 Jul 7 00:18:04.898591 sshd-session[4648]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:04.901065 systemd[1]: sshd@19-10.200.4.38:22-10.200.16.10:48792.service: Deactivated successfully. Jul 7 00:18:04.902435 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:18:04.903126 systemd-logind[1712]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:18:04.904091 systemd-logind[1712]: Removed session 22. Jul 7 00:18:10.005059 systemd[1]: Started sshd@20-10.200.4.38:22-10.200.16.10:49036.service - OpenSSH per-connection server daemon (10.200.16.10:49036). Jul 7 00:18:10.600329 sshd[4662]: Accepted publickey for core from 10.200.16.10 port 49036 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:18:10.601220 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:10.604891 systemd-logind[1712]: New session 23 of user core. Jul 7 00:18:10.613656 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 00:18:11.067977 sshd[4664]: Connection closed by 10.200.16.10 port 49036 Jul 7 00:18:11.068335 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:11.070672 systemd[1]: sshd@20-10.200.4.38:22-10.200.16.10:49036.service: Deactivated successfully. Jul 7 00:18:11.072099 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:18:11.072728 systemd-logind[1712]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:18:11.073642 systemd-logind[1712]: Removed session 23. Jul 7 00:18:11.177703 systemd[1]: Started sshd@21-10.200.4.38:22-10.200.16.10:49038.service - OpenSSH per-connection server daemon (10.200.16.10:49038). Jul 7 00:18:11.775382 sshd[4676]: Accepted publickey for core from 10.200.16.10 port 49038 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:18:11.776229 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:11.779764 systemd-logind[1712]: New session 24 of user core. Jul 7 00:18:11.786655 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:18:13.377864 containerd[1801]: time="2025-07-07T00:18:13.377789598Z" level=info msg="StopContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" with timeout 30 (s)" Jul 7 00:18:13.378641 containerd[1801]: time="2025-07-07T00:18:13.378614853Z" level=info msg="Stop container \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" with signal terminated" Jul 7 00:18:13.390824 systemd[1]: cri-containerd-000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f.scope: Deactivated successfully. 
Jul 7 00:18:13.393296 containerd[1801]: time="2025-07-07T00:18:13.393264524Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:18:13.393690 containerd[1801]: time="2025-07-07T00:18:13.393637177Z" level=info msg="received exit event container_id:\"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" id:\"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" pid:3684 exited_at:{seconds:1751847493 nanos:392428392}" Jul 7 00:18:13.395180 containerd[1801]: time="2025-07-07T00:18:13.395089904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" id:\"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" pid:3684 exited_at:{seconds:1751847493 nanos:392428392}" Jul 7 00:18:13.398186 containerd[1801]: time="2025-07-07T00:18:13.398161160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" id:\"285fded4dee66841173171f761dc77e8d4dd7f36ed7c2266f9d5566a69ce44bf\" pid:4699 exited_at:{seconds:1751847493 nanos:395643579}" Jul 7 00:18:13.400002 containerd[1801]: time="2025-07-07T00:18:13.399985038Z" level=info msg="StopContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" with timeout 2 (s)" Jul 7 00:18:13.401080 containerd[1801]: time="2025-07-07T00:18:13.401010310Z" level=info msg="Stop container \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" with signal terminated" Jul 7 00:18:13.409824 systemd-networkd[1369]: lxc_health: Link DOWN Jul 7 00:18:13.409830 systemd-networkd[1369]: lxc_health: Lost carrier Jul 7 00:18:13.423433 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f-rootfs.mount: Deactivated successfully. Jul 7 00:18:13.424938 systemd[1]: cri-containerd-6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177.scope: Deactivated successfully. Jul 7 00:18:13.425354 systemd[1]: cri-containerd-6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177.scope: Consumed 4.709s CPU time, 124M memory peak, 128K read from disk, 13.6M written to disk. Jul 7 00:18:13.426144 containerd[1801]: time="2025-07-07T00:18:13.426127072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" pid:3794 exited_at:{seconds:1751847493 nanos:424436746}" Jul 7 00:18:13.426317 containerd[1801]: time="2025-07-07T00:18:13.426285289Z" level=info msg="received exit event container_id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" id:\"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" pid:3794 exited_at:{seconds:1751847493 nanos:424436746}" Jul 7 00:18:13.438327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177-rootfs.mount: Deactivated successfully. 
Jul 7 00:18:13.545154 containerd[1801]: time="2025-07-07T00:18:13.545130439Z" level=info msg="StopContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" returns successfully" Jul 7 00:18:13.545675 containerd[1801]: time="2025-07-07T00:18:13.545628316Z" level=info msg="StopPodSandbox for \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\"" Jul 7 00:18:13.545782 containerd[1801]: time="2025-07-07T00:18:13.545769296Z" level=info msg="Container to stop \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.546890 containerd[1801]: time="2025-07-07T00:18:13.546866086Z" level=info msg="StopContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" returns successfully" Jul 7 00:18:13.547363 containerd[1801]: time="2025-07-07T00:18:13.547347862Z" level=info msg="StopPodSandbox for \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\"" Jul 7 00:18:13.547518 containerd[1801]: time="2025-07-07T00:18:13.547506573Z" level=info msg="Container to stop \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.547598 containerd[1801]: time="2025-07-07T00:18:13.547588533Z" level=info msg="Container to stop \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.547643 containerd[1801]: time="2025-07-07T00:18:13.547634997Z" level=info msg="Container to stop \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.547751 containerd[1801]: time="2025-07-07T00:18:13.547681596Z" level=info msg="Container to stop \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.547751 containerd[1801]: time="2025-07-07T00:18:13.547691060Z" level=info msg="Container to stop \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:18:13.552516 systemd[1]: cri-containerd-b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f.scope: Deactivated successfully. Jul 7 00:18:13.553984 systemd[1]: cri-containerd-6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42.scope: Deactivated successfully. Jul 7 00:18:13.555065 containerd[1801]: time="2025-07-07T00:18:13.555045608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" id:\"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" pid:3272 exit_status:137 exited_at:{seconds:1751847493 nanos:554744096}" Jul 7 00:18:13.576196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42-rootfs.mount: Deactivated successfully. Jul 7 00:18:13.578921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f-rootfs.mount: Deactivated successfully. 
Jul 7 00:18:13.593988 containerd[1801]: time="2025-07-07T00:18:13.593881209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" id:\"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" pid:3317 exit_status:137 exited_at:{seconds:1751847493 nanos:556940600}" Jul 7 00:18:13.593988 containerd[1801]: time="2025-07-07T00:18:13.593936126Z" level=info msg="shim disconnected" id=b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f namespace=k8s.io Jul 7 00:18:13.593988 containerd[1801]: time="2025-07-07T00:18:13.593947823Z" level=warning msg="cleaning up after shim disconnected" id=b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f namespace=k8s.io Jul 7 00:18:13.593988 containerd[1801]: time="2025-07-07T00:18:13.593954977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:18:13.594153 containerd[1801]: time="2025-07-07T00:18:13.594121949Z" level=info msg="received exit event sandbox_id:\"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" exit_status:137 exited_at:{seconds:1751847493 nanos:556940600}" Jul 7 00:18:13.595897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42-shm.mount: Deactivated successfully. 
Jul 7 00:18:13.596532 containerd[1801]: time="2025-07-07T00:18:13.596348545Z" level=info msg="shim disconnected" id=6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42 namespace=k8s.io Jul 7 00:18:13.596532 containerd[1801]: time="2025-07-07T00:18:13.596455973Z" level=warning msg="cleaning up after shim disconnected" id=6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42 namespace=k8s.io Jul 7 00:18:13.596532 containerd[1801]: time="2025-07-07T00:18:13.596464405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:18:13.596694 containerd[1801]: time="2025-07-07T00:18:13.596680716Z" level=info msg="received exit event sandbox_id:\"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" exit_status:137 exited_at:{seconds:1751847493 nanos:554744096}" Jul 7 00:18:13.598718 containerd[1801]: time="2025-07-07T00:18:13.598696090Z" level=info msg="TearDown network for sandbox \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" successfully" Jul 7 00:18:13.598718 containerd[1801]: time="2025-07-07T00:18:13.598718847Z" level=info msg="StopPodSandbox for \"6b0f1335dc0700ffe7af3dca263ea10d20c43543da3160013ab72b22452a7c42\" returns successfully" Jul 7 00:18:13.600684 containerd[1801]: time="2025-07-07T00:18:13.600661388Z" level=info msg="TearDown network for sandbox \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" successfully" Jul 7 00:18:13.600684 containerd[1801]: time="2025-07-07T00:18:13.600684297Z" level=info msg="StopPodSandbox for \"b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f\" returns successfully" Jul 7 00:18:13.646809 kubelet[3162]: I0707 00:18:13.646789 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58wnq\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-kube-api-access-58wnq\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 
00:18:13.647722 kubelet[3162]: I0707 00:18:13.646819 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-config-path\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647722 kubelet[3162]: I0707 00:18:13.646836 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fd64a8-a256-4e83-b986-dd60e5e84afd-cilium-config-path\") pod \"19fd64a8-a256-4e83-b986-dd60e5e84afd\" (UID: \"19fd64a8-a256-4e83-b986-dd60e5e84afd\") " Jul 7 00:18:13.647722 kubelet[3162]: I0707 00:18:13.646850 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-cgroup\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647722 kubelet[3162]: I0707 00:18:13.646868 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-clustermesh-secrets\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647722 kubelet[3162]: I0707 00:18:13.646884 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-run\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647722 kubelet[3162]: I0707 00:18:13.646897 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-lib-modules\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646912 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-xtables-lock\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646927 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hostproc\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646939 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cni-path\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646956 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-net\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646973 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hubble-tls\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647856 kubelet[3162]: I0707 00:18:13.646986 3162 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-etc-cni-netd\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647974 kubelet[3162]: I0707 00:18:13.647001 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-bpf-maps\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647974 kubelet[3162]: I0707 00:18:13.647014 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-kernel\") pod \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\" (UID: \"a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad\") " Jul 7 00:18:13.647974 kubelet[3162]: I0707 00:18:13.647029 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkbls\" (UniqueName: \"kubernetes.io/projected/19fd64a8-a256-4e83-b986-dd60e5e84afd-kube-api-access-mkbls\") pod \"19fd64a8-a256-4e83-b986-dd60e5e84afd\" (UID: \"19fd64a8-a256-4e83-b986-dd60e5e84afd\") " Jul 7 00:18:13.648199 kubelet[3162]: I0707 00:18:13.648175 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.649397 kubelet[3162]: I0707 00:18:13.649373 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fd64a8-a256-4e83-b986-dd60e5e84afd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19fd64a8-a256-4e83-b986-dd60e5e84afd" (UID: "19fd64a8-a256-4e83-b986-dd60e5e84afd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:18:13.650272 kubelet[3162]: I0707 00:18:13.650250 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650330 kubelet[3162]: I0707 00:18:13.650304 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650330 kubelet[3162]: I0707 00:18:13.650317 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650330 kubelet[3162]: I0707 00:18:13.650328 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650404 kubelet[3162]: I0707 00:18:13.650340 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650404 kubelet[3162]: I0707 00:18:13.650353 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650870 kubelet[3162]: I0707 00:18:13.650809 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650870 kubelet[3162]: I0707 00:18:13.650835 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.650870 kubelet[3162]: I0707 00:18:13.650849 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:18:13.651160 kubelet[3162]: I0707 00:18:13.651132 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fd64a8-a256-4e83-b986-dd60e5e84afd-kube-api-access-mkbls" (OuterVolumeSpecName: "kube-api-access-mkbls") pod "19fd64a8-a256-4e83-b986-dd60e5e84afd" (UID: "19fd64a8-a256-4e83-b986-dd60e5e84afd"). InnerVolumeSpecName "kube-api-access-mkbls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:18:13.651866 kubelet[3162]: I0707 00:18:13.651825 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:18:13.652727 kubelet[3162]: I0707 00:18:13.652700 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:18:13.653365 kubelet[3162]: I0707 00:18:13.653347 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-kube-api-access-58wnq" (OuterVolumeSpecName: "kube-api-access-58wnq") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "kube-api-access-58wnq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:18:13.653460 kubelet[3162]: I0707 00:18:13.653441 3162 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" (UID: "a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:18:13.747208 kubelet[3162]: I0707 00:18:13.747194 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-run\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747212 3162 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-lib-modules\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747220 3162 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-xtables-lock\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747228 3162 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hostproc\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747236 3162 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cni-path\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747245 3162 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-etc-cni-netd\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747254 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-net\") on node \"ci-4344.1.1-a-19494a3847\" 
DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747262 3162 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-hubble-tls\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747363 kubelet[3162]: I0707 00:18:13.747271 3162 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-bpf-maps\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747278 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-host-proc-sys-kernel\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747287 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mkbls\" (UniqueName: \"kubernetes.io/projected/19fd64a8-a256-4e83-b986-dd60e5e84afd-kube-api-access-mkbls\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747297 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19fd64a8-a256-4e83-b986-dd60e5e84afd-cilium-config-path\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747305 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58wnq\" (UniqueName: \"kubernetes.io/projected/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-kube-api-access-58wnq\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747314 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-config-path\") on 
node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747322 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-cilium-cgroup\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.747539 kubelet[3162]: I0707 00:18:13.747331 3162 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad-clustermesh-secrets\") on node \"ci-4344.1.1-a-19494a3847\" DevicePath \"\"" Jul 7 00:18:13.748743 kubelet[3162]: I0707 00:18:13.748728 3162 scope.go:117] "RemoveContainer" containerID="000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f" Jul 7 00:18:13.750827 containerd[1801]: time="2025-07-07T00:18:13.750338786Z" level=info msg="RemoveContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\"" Jul 7 00:18:13.754491 systemd[1]: Removed slice kubepods-besteffort-pod19fd64a8_a256_4e83_b986_dd60e5e84afd.slice - libcontainer container kubepods-besteffort-pod19fd64a8_a256_4e83_b986_dd60e5e84afd.slice. Jul 7 00:18:13.758986 systemd[1]: Removed slice kubepods-burstable-poda8e9ddbb_b1ea_4bd5_a67f_043ca03fa2ad.slice - libcontainer container kubepods-burstable-poda8e9ddbb_b1ea_4bd5_a67f_043ca03fa2ad.slice. Jul 7 00:18:13.759606 systemd[1]: kubepods-burstable-poda8e9ddbb_b1ea_4bd5_a67f_043ca03fa2ad.slice: Consumed 4.769s CPU time, 124.4M memory peak, 128K read from disk, 14M written to disk. 
Jul 7 00:18:13.762919 containerd[1801]: time="2025-07-07T00:18:13.762898894Z" level=info msg="RemoveContainer for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" returns successfully" Jul 7 00:18:13.763114 kubelet[3162]: I0707 00:18:13.763102 3162 scope.go:117] "RemoveContainer" containerID="000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f" Jul 7 00:18:13.763331 containerd[1801]: time="2025-07-07T00:18:13.763297701Z" level=error msg="ContainerStatus for \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\": not found" Jul 7 00:18:13.763415 kubelet[3162]: E0707 00:18:13.763396 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\": not found" containerID="000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f" Jul 7 00:18:13.763500 kubelet[3162]: I0707 00:18:13.763419 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f"} err="failed to get container status \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\": rpc error: code = NotFound desc = an error occurred when try to find container \"000e5c323fd944dd57499be1605759a783fcfa6c1158812b8fe4987b25a5631f\": not found" Jul 7 00:18:13.763500 kubelet[3162]: I0707 00:18:13.763498 3162 scope.go:117] "RemoveContainer" containerID="6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177" Jul 7 00:18:13.764966 containerd[1801]: time="2025-07-07T00:18:13.764940864Z" level=info msg="RemoveContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\"" Jul 7 00:18:13.773637 
containerd[1801]: time="2025-07-07T00:18:13.773613383Z" level=info msg="RemoveContainer for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" returns successfully" Jul 7 00:18:13.773760 kubelet[3162]: I0707 00:18:13.773746 3162 scope.go:117] "RemoveContainer" containerID="930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b" Jul 7 00:18:13.775416 containerd[1801]: time="2025-07-07T00:18:13.775386050Z" level=info msg="RemoveContainer for \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\"" Jul 7 00:18:13.783244 containerd[1801]: time="2025-07-07T00:18:13.783209204Z" level=info msg="RemoveContainer for \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" returns successfully" Jul 7 00:18:13.783372 kubelet[3162]: I0707 00:18:13.783355 3162 scope.go:117] "RemoveContainer" containerID="9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222" Jul 7 00:18:13.785774 containerd[1801]: time="2025-07-07T00:18:13.785750722Z" level=info msg="RemoveContainer for \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\"" Jul 7 00:18:13.793142 containerd[1801]: time="2025-07-07T00:18:13.793119155Z" level=info msg="RemoveContainer for \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" returns successfully" Jul 7 00:18:13.793264 kubelet[3162]: I0707 00:18:13.793246 3162 scope.go:117] "RemoveContainer" containerID="6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e" Jul 7 00:18:13.794315 containerd[1801]: time="2025-07-07T00:18:13.794277962Z" level=info msg="RemoveContainer for \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\"" Jul 7 00:18:13.805865 containerd[1801]: time="2025-07-07T00:18:13.805844961Z" level=info msg="RemoveContainer for \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" returns successfully" Jul 7 00:18:13.806014 kubelet[3162]: I0707 00:18:13.806003 3162 scope.go:117] "RemoveContainer" 
containerID="2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388" Jul 7 00:18:13.806945 containerd[1801]: time="2025-07-07T00:18:13.806927366Z" level=info msg="RemoveContainer for \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\"" Jul 7 00:18:13.814308 containerd[1801]: time="2025-07-07T00:18:13.814288233Z" level=info msg="RemoveContainer for \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" returns successfully" Jul 7 00:18:13.814448 kubelet[3162]: I0707 00:18:13.814421 3162 scope.go:117] "RemoveContainer" containerID="6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177" Jul 7 00:18:13.814609 containerd[1801]: time="2025-07-07T00:18:13.814576136Z" level=error msg="ContainerStatus for \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\": not found" Jul 7 00:18:13.814698 kubelet[3162]: E0707 00:18:13.814683 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\": not found" containerID="6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177" Jul 7 00:18:13.814734 kubelet[3162]: I0707 00:18:13.814703 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177"} err="failed to get container status \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f0db7bbaa783a0f8fb0a4f72495e49376c481c2bbcb3a2fd05ee646c4e3b177\": not found" Jul 7 00:18:13.814734 kubelet[3162]: I0707 00:18:13.814719 3162 scope.go:117] "RemoveContainer" 
containerID="930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b" Jul 7 00:18:13.814855 containerd[1801]: time="2025-07-07T00:18:13.814831521Z" level=error msg="ContainerStatus for \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\": not found" Jul 7 00:18:13.814939 kubelet[3162]: E0707 00:18:13.814920 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\": not found" containerID="930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b" Jul 7 00:18:13.814968 kubelet[3162]: I0707 00:18:13.814942 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b"} err="failed to get container status \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\": rpc error: code = NotFound desc = an error occurred when try to find container \"930efae50470cfbd9f195322d78581f41445bac0d6c85d220c887e70df69212b\": not found" Jul 7 00:18:13.814968 kubelet[3162]: I0707 00:18:13.814960 3162 scope.go:117] "RemoveContainer" containerID="9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222" Jul 7 00:18:13.815123 containerd[1801]: time="2025-07-07T00:18:13.815082356Z" level=error msg="ContainerStatus for \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\": not found" Jul 7 00:18:13.815185 kubelet[3162]: E0707 00:18:13.815171 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\": not found" containerID="9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222" Jul 7 00:18:13.815249 kubelet[3162]: I0707 00:18:13.815187 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222"} err="failed to get container status \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e0d933b9d82e0e6d16f063f0199aa309fe43dae66af0b0bf0909a839c833222\": not found" Jul 7 00:18:13.815281 kubelet[3162]: I0707 00:18:13.815260 3162 scope.go:117] "RemoveContainer" containerID="6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e" Jul 7 00:18:13.815409 containerd[1801]: time="2025-07-07T00:18:13.815381120Z" level=error msg="ContainerStatus for \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\": not found" Jul 7 00:18:13.815474 kubelet[3162]: E0707 00:18:13.815461 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\": not found" containerID="6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e" Jul 7 00:18:13.815503 kubelet[3162]: I0707 00:18:13.815478 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e"} err="failed to get container status \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"6181ba617de3b49288c1725c777942b1ac05cd061f8e16b4aadd28f2f8dd997e\": not found" Jul 7 00:18:13.815503 kubelet[3162]: I0707 00:18:13.815497 3162 scope.go:117] "RemoveContainer" containerID="2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388" Jul 7 00:18:13.815669 containerd[1801]: time="2025-07-07T00:18:13.815629218Z" level=error msg="ContainerStatus for \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\": not found" Jul 7 00:18:13.815778 kubelet[3162]: E0707 00:18:13.815755 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\": not found" containerID="2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388" Jul 7 00:18:13.815815 kubelet[3162]: I0707 00:18:13.815771 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388"} err="failed to get container status \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\": rpc error: code = NotFound desc = an error occurred when try to find container \"2518fe73179ba729d3a02817a40c59eb73a0db5da4dc10b492a690eef3870388\": not found" Jul 7 00:18:14.422564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b312406b96cd4c3daf0fb6746b0c8194111d5f40cbed014cdc69a7bfd869736f-shm.mount: Deactivated successfully. Jul 7 00:18:14.422635 systemd[1]: var-lib-kubelet-pods-19fd64a8\x2da256\x2d4e83\x2db986\x2ddd60e5e84afd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmkbls.mount: Deactivated successfully. 
Jul 7 00:18:14.422692 systemd[1]: var-lib-kubelet-pods-a8e9ddbb\x2db1ea\x2d4bd5\x2da67f\x2d043ca03fa2ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58wnq.mount: Deactivated successfully. Jul 7 00:18:14.422830 systemd[1]: var-lib-kubelet-pods-a8e9ddbb\x2db1ea\x2d4bd5\x2da67f\x2d043ca03fa2ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:18:14.422888 systemd[1]: var-lib-kubelet-pods-a8e9ddbb\x2db1ea\x2d4bd5\x2da67f\x2d043ca03fa2ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:18:14.485464 kubelet[3162]: I0707 00:18:14.485444 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fd64a8-a256-4e83-b986-dd60e5e84afd" path="/var/lib/kubelet/pods/19fd64a8-a256-4e83-b986-dd60e5e84afd/volumes" Jul 7 00:18:14.485765 kubelet[3162]: I0707 00:18:14.485753 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" path="/var/lib/kubelet/pods/a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad/volumes" Jul 7 00:18:15.429169 sshd[4678]: Connection closed by 10.200.16.10 port 49038 Jul 7 00:18:15.429441 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:15.432469 systemd[1]: sshd@21-10.200.4.38:22-10.200.16.10:49038.service: Deactivated successfully. Jul 7 00:18:15.433823 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:18:15.434450 systemd-logind[1712]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:18:15.435625 systemd-logind[1712]: Removed session 24. Jul 7 00:18:15.535844 systemd[1]: Started sshd@22-10.200.4.38:22-10.200.16.10:49048.service - OpenSSH per-connection server daemon (10.200.16.10:49048). 
Jul 7 00:18:16.128377 sshd[4828]: Accepted publickey for core from 10.200.16.10 port 49048 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY Jul 7 00:18:16.129186 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:16.132377 systemd-logind[1712]: New session 25 of user core. Jul 7 00:18:16.138639 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:18:16.552538 kubelet[3162]: E0707 00:18:16.552422 3162 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:18:16.839200 kubelet[3162]: I0707 00:18:16.838840 3162 memory_manager.go:355] "RemoveStaleState removing state" podUID="19fd64a8-a256-4e83-b986-dd60e5e84afd" containerName="cilium-operator" Jul 7 00:18:16.839200 kubelet[3162]: I0707 00:18:16.839131 3162 memory_manager.go:355] "RemoveStaleState removing state" podUID="a8e9ddbb-b1ea-4bd5-a67f-043ca03fa2ad" containerName="cilium-agent" Jul 7 00:18:16.848857 systemd[1]: Created slice kubepods-burstable-pod3987cd80_e351_4d02_83c7_02fcc52ae437.slice - libcontainer container kubepods-burstable-pod3987cd80_e351_4d02_83c7_02fcc52ae437.slice. Jul 7 00:18:16.931597 sshd[4830]: Connection closed by 10.200.16.10 port 49048 Jul 7 00:18:16.932968 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:16.936513 systemd[1]: sshd@22-10.200.4.38:22-10.200.16.10:49048.service: Deactivated successfully. Jul 7 00:18:16.938892 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:18:16.940347 systemd-logind[1712]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:18:16.941295 systemd-logind[1712]: Removed session 25. 
Jul 7 00:18:16.964568 kubelet[3162]: I0707 00:18:16.964549 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-lib-modules\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964575 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-xtables-lock\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964593 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnl46\" (UniqueName: \"kubernetes.io/projected/3987cd80-e351-4d02-83c7-02fcc52ae437-kube-api-access-fnl46\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964610 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-cilium-run\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964623 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3987cd80-e351-4d02-83c7-02fcc52ae437-clustermesh-secrets\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964636 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3987cd80-e351-4d02-83c7-02fcc52ae437-hubble-tls\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964683 kubelet[3162]: I0707 00:18:16.964649 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-hostproc\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964663 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-etc-cni-netd\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964677 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-bpf-maps\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964690 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3987cd80-e351-4d02-83c7-02fcc52ae437-cilium-ipsec-secrets\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964706 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-cni-path\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964722 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-cilium-cgroup\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964901 kubelet[3162]: I0707 00:18:16.964742 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-host-proc-sys-kernel\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964982 kubelet[3162]: I0707 00:18:16.964760 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3987cd80-e351-4d02-83c7-02fcc52ae437-cilium-config-path\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:16.964982 kubelet[3162]: I0707 00:18:16.964774 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3987cd80-e351-4d02-83c7-02fcc52ae437-host-proc-sys-net\") pod \"cilium-rrdv2\" (UID: \"3987cd80-e351-4d02-83c7-02fcc52ae437\") " pod="kube-system/cilium-rrdv2"
Jul 7 00:18:17.035686 systemd[1]: Started sshd@23-10.200.4.38:22-10.200.16.10:49054.service - OpenSSH per-connection server daemon (10.200.16.10:49054).
Jul 7 00:18:17.154856 containerd[1801]: time="2025-07-07T00:18:17.154824497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rrdv2,Uid:3987cd80-e351-4d02-83c7-02fcc52ae437,Namespace:kube-system,Attempt:0,}"
Jul 7 00:18:17.185882 containerd[1801]: time="2025-07-07T00:18:17.185826851Z" level=info msg="connecting to shim 2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:18:17.207648 systemd[1]: Started cri-containerd-2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d.scope - libcontainer container 2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d.
Jul 7 00:18:17.225865 containerd[1801]: time="2025-07-07T00:18:17.225845035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rrdv2,Uid:3987cd80-e351-4d02-83c7-02fcc52ae437,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\""
Jul 7 00:18:17.228127 containerd[1801]: time="2025-07-07T00:18:17.228104836Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 00:18:17.242672 containerd[1801]: time="2025-07-07T00:18:17.242650730Z" level=info msg="Container 6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:17.255103 containerd[1801]: time="2025-07-07T00:18:17.255082087Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\""
Jul 7 00:18:17.256075 containerd[1801]: time="2025-07-07T00:18:17.255434538Z" level=info msg="StartContainer for \"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\""
Jul 7 00:18:17.256137 containerd[1801]: time="2025-07-07T00:18:17.256078431Z" level=info msg="connecting to shim 6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" protocol=ttrpc version=3
Jul 7 00:18:17.281669 systemd[1]: Started cri-containerd-6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7.scope - libcontainer container 6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7.
Jul 7 00:18:17.304126 containerd[1801]: time="2025-07-07T00:18:17.304067427Z" level=info msg="StartContainer for \"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\" returns successfully"
Jul 7 00:18:17.306269 systemd[1]: cri-containerd-6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7.scope: Deactivated successfully.
Jul 7 00:18:17.308249 containerd[1801]: time="2025-07-07T00:18:17.308168523Z" level=info msg="received exit event container_id:\"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\" id:\"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\" pid:4904 exited_at:{seconds:1751847497 nanos:308006897}"
Jul 7 00:18:17.308314 containerd[1801]: time="2025-07-07T00:18:17.308230940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\" id:\"6a609f4cde0d759d3be36de4182f1c8cc81d38e517ed002286e4653d05d099f7\" pid:4904 exited_at:{seconds:1751847497 nanos:308006897}"
Jul 7 00:18:17.634184 sshd[4840]: Accepted publickey for core from 10.200.16.10 port 49054 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:18:17.635001 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:18:17.638469 systemd-logind[1712]: New session 26 of user core.
Jul 7 00:18:17.645604 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 00:18:17.764645 containerd[1801]: time="2025-07-07T00:18:17.764626698Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 00:18:17.781395 containerd[1801]: time="2025-07-07T00:18:17.781372420Z" level=info msg="Container 07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:17.797852 containerd[1801]: time="2025-07-07T00:18:17.797824913Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\""
Jul 7 00:18:17.798251 containerd[1801]: time="2025-07-07T00:18:17.798165209Z" level=info msg="StartContainer for \"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\""
Jul 7 00:18:17.799069 containerd[1801]: time="2025-07-07T00:18:17.799029875Z" level=info msg="connecting to shim 07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" protocol=ttrpc version=3
Jul 7 00:18:17.815659 systemd[1]: Started cri-containerd-07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3.scope - libcontainer container 07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3.
Jul 7 00:18:17.836814 containerd[1801]: time="2025-07-07T00:18:17.836785443Z" level=info msg="StartContainer for \"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\" returns successfully"
Jul 7 00:18:17.838402 systemd[1]: cri-containerd-07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3.scope: Deactivated successfully.
Jul 7 00:18:17.839964 containerd[1801]: time="2025-07-07T00:18:17.839592892Z" level=info msg="received exit event container_id:\"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\" id:\"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\" pid:4949 exited_at:{seconds:1751847497 nanos:839043731}"
Jul 7 00:18:17.840160 containerd[1801]: time="2025-07-07T00:18:17.840064626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\" id:\"07714c54f6aba7539fdf244d41a61eb726fc728de2e91bdbe0548edbd52cd3f3\" pid:4949 exited_at:{seconds:1751847497 nanos:839043731}"
Jul 7 00:18:18.055660 sshd[4934]: Connection closed by 10.200.16.10 port 49054
Jul 7 00:18:18.056040 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
Jul 7 00:18:18.057911 systemd[1]: sshd@23-10.200.4.38:22-10.200.16.10:49054.service: Deactivated successfully.
Jul 7 00:18:18.059237 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 00:18:18.060689 systemd-logind[1712]: Session 26 logged out. Waiting for processes to exit.
Jul 7 00:18:18.061488 systemd-logind[1712]: Removed session 26.
Jul 7 00:18:18.164761 systemd[1]: Started sshd@24-10.200.4.38:22-10.200.16.10:49066.service - OpenSSH per-connection server daemon (10.200.16.10:49066).
Jul 7 00:18:18.759270 sshd[4987]: Accepted publickey for core from 10.200.16.10 port 49066 ssh2: RSA SHA256:Wk8evofofxp89naqeEXik3LEsXEGQ/OLJ6+4xrs86aY
Jul 7 00:18:18.760109 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:18:18.762981 systemd-logind[1712]: New session 27 of user core.
Jul 7 00:18:18.766779 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 00:18:18.770049 containerd[1801]: time="2025-07-07T00:18:18.769660692Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 00:18:18.796551 containerd[1801]: time="2025-07-07T00:18:18.795595732Z" level=info msg="Container 4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:18.796226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528663374.mount: Deactivated successfully.
Jul 7 00:18:18.814344 containerd[1801]: time="2025-07-07T00:18:18.814323430Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\""
Jul 7 00:18:18.814691 containerd[1801]: time="2025-07-07T00:18:18.814638326Z" level=info msg="StartContainer for \"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\""
Jul 7 00:18:18.818686 containerd[1801]: time="2025-07-07T00:18:18.818425723Z" level=info msg="connecting to shim 4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" protocol=ttrpc version=3
Jul 7 00:18:18.837668 systemd[1]: Started cri-containerd-4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669.scope - libcontainer container 4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669.
Jul 7 00:18:18.859462 systemd[1]: cri-containerd-4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669.scope: Deactivated successfully.
Jul 7 00:18:18.861752 containerd[1801]: time="2025-07-07T00:18:18.860757414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\" id:\"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\" pid:5004 exited_at:{seconds:1751847498 nanos:860325761}"
Jul 7 00:18:18.863292 containerd[1801]: time="2025-07-07T00:18:18.863267264Z" level=info msg="received exit event container_id:\"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\" id:\"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\" pid:5004 exited_at:{seconds:1751847498 nanos:860325761}"
Jul 7 00:18:18.868613 containerd[1801]: time="2025-07-07T00:18:18.868591231Z" level=info msg="StartContainer for \"4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669\" returns successfully"
Jul 7 00:18:19.070626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a9b23fb2d09653a6c1e1f95c19fa5853a25b45c2b1a579d9206c79cb082b669-rootfs.mount: Deactivated successfully.
Jul 7 00:18:19.772512 containerd[1801]: time="2025-07-07T00:18:19.772416253Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 00:18:19.792542 containerd[1801]: time="2025-07-07T00:18:19.789665015Z" level=info msg="Container 28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:19.807183 containerd[1801]: time="2025-07-07T00:18:19.807160850Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\""
Jul 7 00:18:19.807490 containerd[1801]: time="2025-07-07T00:18:19.807475369Z" level=info msg="StartContainer for \"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\""
Jul 7 00:18:19.808429 containerd[1801]: time="2025-07-07T00:18:19.808380376Z" level=info msg="connecting to shim 28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" protocol=ttrpc version=3
Jul 7 00:18:19.827671 systemd[1]: Started cri-containerd-28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39.scope - libcontainer container 28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39.
Jul 7 00:18:19.845172 systemd[1]: cri-containerd-28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39.scope: Deactivated successfully.
Jul 7 00:18:19.845782 containerd[1801]: time="2025-07-07T00:18:19.845754983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\" id:\"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\" pid:5052 exited_at:{seconds:1751847499 nanos:845564111}"
Jul 7 00:18:19.849799 containerd[1801]: time="2025-07-07T00:18:19.849714714Z" level=info msg="received exit event container_id:\"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\" id:\"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\" pid:5052 exited_at:{seconds:1751847499 nanos:845564111}"
Jul 7 00:18:19.854411 containerd[1801]: time="2025-07-07T00:18:19.854383076Z" level=info msg="StartContainer for \"28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39\" returns successfully"
Jul 7 00:18:19.862667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e74a72162d808f53d714f851a585e10f18bd3c8148de10742be829b8b3eb39-rootfs.mount: Deactivated successfully.
Jul 7 00:18:20.026247 kubelet[3162]: I0707 00:18:20.026167 3162 setters.go:602] "Node became not ready" node="ci-4344.1.1-a-19494a3847" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:18:20Z","lastTransitionTime":"2025-07-07T00:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 00:18:20.776388 containerd[1801]: time="2025-07-07T00:18:20.776365162Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 00:18:20.796758 containerd[1801]: time="2025-07-07T00:18:20.796073745Z" level=info msg="Container df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:18:20.799393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974001534.mount: Deactivated successfully.
Jul 7 00:18:20.809896 containerd[1801]: time="2025-07-07T00:18:20.809870459Z" level=info msg="CreateContainer within sandbox \"2ad37a8f6984a6973b2546d1b3a911ff9abe526e899ac92297210bee6b6a7b2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\""
Jul 7 00:18:20.810246 containerd[1801]: time="2025-07-07T00:18:20.810189539Z" level=info msg="StartContainer for \"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\""
Jul 7 00:18:20.811244 containerd[1801]: time="2025-07-07T00:18:20.810997465Z" level=info msg="connecting to shim df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45" address="unix:///run/containerd/s/530a816ab64533fc43df52e2b78c09706277fb5c82cfd24dc40abe3a31250c37" protocol=ttrpc version=3
Jul 7 00:18:20.828661 systemd[1]: Started cri-containerd-df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45.scope - libcontainer container df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45.
Jul 7 00:18:20.852962 containerd[1801]: time="2025-07-07T00:18:20.852937102Z" level=info msg="StartContainer for \"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" returns successfully"
Jul 7 00:18:20.931536 containerd[1801]: time="2025-07-07T00:18:20.931487015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" id:\"84db8214c862a00a2c985b40364cb0dacdf22d3b9afa1d94fe84af702e2174ea\" pid:5126 exited_at:{seconds:1751847500 nanos:931312301}"
Jul 7 00:18:21.187552 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jul 7 00:18:21.484543 kubelet[3162]: E0707 00:18:21.484469 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k8htz" podUID="ab88c887-6469-4e6e-8419-c0cdd80fd765"
Jul 7 00:18:23.221601 containerd[1801]: time="2025-07-07T00:18:23.221553511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" id:\"24dec176bd31e51e7335592e0228409f6bdc595906d36825c72d370d900f2406\" pid:5521 exit_status:1 exited_at:{seconds:1751847503 nanos:221252123}"
Jul 7 00:18:23.491793 systemd-networkd[1369]: lxc_health: Link UP
Jul 7 00:18:23.506055 systemd-networkd[1369]: lxc_health: Gained carrier
Jul 7 00:18:25.079704 systemd-networkd[1369]: lxc_health: Gained IPv6LL
Jul 7 00:18:25.175342 kubelet[3162]: I0707 00:18:25.175273 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rrdv2" podStartSLOduration=9.175249613 podStartE2EDuration="9.175249613s" podCreationTimestamp="2025-07-07 00:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:21.792697978 +0000 UTC m=+165.388084723" watchObservedRunningTime="2025-07-07 00:18:25.175249613 +0000 UTC m=+168.770636357"
Jul 7 00:18:25.403562 containerd[1801]: time="2025-07-07T00:18:25.403518404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" id:\"697296eabf9fefd1b7da7dbe1204f0ed44015bcbc8af51c2c125af433c97012c\" pid:5655 exited_at:{seconds:1751847505 nanos:403155105}"
Jul 7 00:18:27.476174 containerd[1801]: time="2025-07-07T00:18:27.475922739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" id:\"c12d3742e3b541c82558d593268fa1578943385567d1d727b2e96f706f6f3a18\" pid:5687 exited_at:{seconds:1751847507 nanos:475675450}"
Jul 7 00:18:27.479023 kubelet[3162]: E0707 00:18:27.478602 3162 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52362->127.0.0.1:33929: write tcp 127.0.0.1:52362->127.0.0.1:33929: write: broken pipe
Jul 7 00:18:29.545665 containerd[1801]: time="2025-07-07T00:18:29.545623763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df129a06cf71c39db5ed80f4b1eb146934fbf3989b28f6ad290debef3ca12c45\" id:\"07fce70312745bdbe59bad5910ce3cd605ac4662ec29ca5e438750f454116c1b\" pid:5709 exited_at:{seconds:1751847509 nanos:545229110}"
Jul 7 00:18:29.656142 sshd[4989]: Connection closed by 10.200.16.10 port 49066
Jul 7 00:18:29.656619 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
Jul 7 00:18:29.659072 systemd[1]: sshd@24-10.200.4.38:22-10.200.16.10:49066.service: Deactivated successfully.
Jul 7 00:18:29.660681 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 00:18:29.662077 systemd-logind[1712]: Session 27 logged out. Waiting for processes to exit.
Jul 7 00:18:29.663246 systemd-logind[1712]: Removed session 27.