Aug 13 00:25:12.989861 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 00:25:12.989888 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:25:12.989899 kernel: BIOS-provided physical RAM map: Aug 13 00:25:12.989906 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:25:12.989913 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 13 00:25:12.989920 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Aug 13 00:25:12.989928 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Aug 13 00:25:12.989937 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Aug 13 00:25:12.989944 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Aug 13 00:25:12.989951 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 13 00:25:12.989958 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 13 00:25:12.989965 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 13 00:25:12.989972 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 13 00:25:12.989978 kernel: printk: legacy bootconsole [earlyser0] enabled Aug 13 00:25:12.989988 kernel: NX (Execute Disable) protection: active Aug 13 00:25:12.989996 kernel: APIC: Static calls initialized Aug 13 00:25:12.990003 kernel: efi: EFI v2.7 by Microsoft Aug 13 00:25:12.990071 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac0018 RNG=0x3ffd2018 Aug 13 00:25:12.990079 kernel: random: crng init done Aug 13 00:25:12.990086 kernel: secureboot: Secure boot disabled Aug 13 00:25:12.990094 kernel: SMBIOS 3.1.0 present. 
Aug 13 00:25:12.990101 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Aug 13 00:25:12.990109 kernel: DMI: Memory slots populated: 2/2 Aug 13 00:25:12.990118 kernel: Hypervisor detected: Microsoft Hyper-V Aug 13 00:25:12.990125 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Aug 13 00:25:12.990133 kernel: Hyper-V: Nested features: 0x3e0101 Aug 13 00:25:12.990140 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 13 00:25:12.990147 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 13 00:25:12.990154 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 13 00:25:12.990162 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 13 00:25:12.990169 kernel: tsc: Detected 2300.001 MHz processor Aug 13 00:25:12.990177 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:25:12.990185 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:25:12.990195 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Aug 13 00:25:12.990203 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 13 00:25:12.990211 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:25:12.990219 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Aug 13 00:25:12.990226 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Aug 13 00:25:12.990234 kernel: Using GB pages for direct mapping Aug 13 00:25:12.990241 kernel: ACPI: Early table checksum verification disabled Aug 13 00:25:12.990252 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 13 00:25:12.990261 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990270 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990278 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Aug 13 00:25:12.990285 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 13 00:25:12.990294 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990302 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990311 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990319 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Aug 13 00:25:12.990327 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Aug 13 00:25:12.990335 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:25:12.990343 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 13 00:25:12.990351 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Aug 13 00:25:12.990359 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 13 00:25:12.990367 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 13 00:25:12.990375 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Aug 13 00:25:12.990384 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 13 00:25:12.990393 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Aug 13 00:25:12.990400 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Aug 13 00:25:12.990408 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 13 00:25:12.990420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Aug 13 00:25:12.990427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Aug 13 00:25:12.990435 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Aug 13 00:25:12.990443 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Aug 13 00:25:12.990450 kernel: Zone ranges: Aug 13 00:25:12.990459 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:25:12.990467 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 00:25:12.990475 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 00:25:12.990482 kernel: Device empty Aug 13 00:25:12.990489 kernel: Movable zone start for each node Aug 13 00:25:12.990496 kernel: Early memory node ranges Aug 13 00:25:12.990503 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 13 00:25:12.990511 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Aug 13 00:25:12.990519 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Aug 13 00:25:12.990527 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 13 00:25:12.990535 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 13 00:25:12.990552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 13 00:25:12.990560 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:25:12.990567 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 13 00:25:12.990574 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Aug 13 00:25:12.990582 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Aug 13 00:25:12.990589 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 13 00:25:12.990597 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:25:12.990606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:25:12.990613 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:25:12.990621 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 13 00:25:12.990628 kernel: TSC deadline timer available Aug 13 00:25:12.990636 kernel: CPU topo: Max. logical packages: 1 Aug 13 00:25:12.990643 kernel: CPU topo: Max. logical dies: 1 Aug 13 00:25:12.990650 kernel: CPU topo: Max. dies per package: 1 Aug 13 00:25:12.990657 kernel: CPU topo: Max. threads per core: 2 Aug 13 00:25:12.990665 kernel: CPU topo: Num. cores per package: 1 Aug 13 00:25:12.990673 kernel: CPU topo: Num. 
threads per package: 2 Aug 13 00:25:12.990681 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 00:25:12.990688 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 13 00:25:12.990696 kernel: Booting paravirtualized kernel on Hyper-V Aug 13 00:25:12.990703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:25:12.990711 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:25:12.990719 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 00:25:12.990726 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 00:25:12.990733 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:25:12.990745 kernel: Hyper-V: PV spinlocks enabled Aug 13 00:25:12.990753 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:25:12.990762 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:25:12.990769 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:25:12.990777 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 13 00:25:12.990785 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:25:12.990792 kernel: Fallback order for Node 0: 0 Aug 13 00:25:12.990800 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Aug 13 00:25:12.990808 kernel: Policy zone: Normal Aug 13 00:25:12.990816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:25:12.990823 kernel: software IO TLB: area num 2. Aug 13 00:25:12.990830 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:25:12.990838 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 00:25:12.990845 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 00:25:12.990853 kernel: Dynamic Preempt: voluntary Aug 13 00:25:12.990861 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:25:12.990869 kernel: rcu: RCU event tracing is enabled. Aug 13 00:25:12.990884 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:25:12.990892 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:25:12.990900 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:25:12.990909 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:25:12.990917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:25:12.990925 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:25:12.990933 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:25:12.990941 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:25:12.990949 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Aug 13 00:25:12.990958 kernel: Using NULL legacy PIC Aug 13 00:25:12.990967 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 13 00:25:12.990975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:25:12.990983 kernel: Console: colour dummy device 80x25 Aug 13 00:25:12.990991 kernel: printk: legacy console [tty1] enabled Aug 13 00:25:12.990999 kernel: printk: legacy console [ttyS0] enabled Aug 13 00:25:12.991007 kernel: printk: legacy bootconsole [earlyser0] disabled Aug 13 00:25:12.991015 kernel: ACPI: Core revision 20240827 Aug 13 00:25:12.991024 kernel: Failed to register legacy timer interrupt Aug 13 00:25:12.991032 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:25:12.991040 kernel: x2apic enabled Aug 13 00:25:12.991048 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 00:25:12.991056 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Aug 13 00:25:12.991064 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 13 00:25:12.991089 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Aug 13 00:25:12.991097 kernel: Hyper-V: Using IPI hypercalls Aug 13 00:25:12.991105 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 13 00:25:12.991114 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 13 00:25:12.991123 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 13 00:25:12.991131 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 13 00:25:12.991139 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 13 00:25:12.991147 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 13 00:25:12.991156 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns Aug 13 00:25:12.991164 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001) Aug 13 00:25:12.991173 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 00:25:12.991182 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 00:25:12.991191 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 00:25:12.991199 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:25:12.991206 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:25:12.991215 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:25:12.991223 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 13 00:25:12.991232 kernel: RETBleed: Vulnerable
Aug 13 00:25:12.991240 kernel: Speculative Store Bypass: Vulnerable
Aug 13 00:25:12.991248 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:25:12.991256 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:25:12.991264 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:25:12.991274 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:25:12.991282 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 00:25:12.991291 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 00:25:12.991299 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 00:25:12.991307 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Aug 13 00:25:12.991316 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Aug 13 00:25:12.991324 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Aug 13 00:25:12.991332 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:25:12.991341 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 00:25:12.991349 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 00:25:12.991357 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 00:25:12.991367 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Aug 13 00:25:12.991375 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Aug 13 00:25:12.991383 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Aug 13 00:25:12.991392 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Aug 13 00:25:12.991400 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:25:12.991408 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:25:12.991417 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:25:12.991425 kernel: landlock: Up and running.
Aug 13 00:25:12.991433 kernel: SELinux: Initializing.
Aug 13 00:25:12.991441 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:25:12.991450 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:25:12.991458 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Aug 13 00:25:12.991468 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Aug 13 00:25:12.991477 kernel: signal: max sigframe size: 11952
Aug 13 00:25:12.991485 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:25:12.991494 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:25:12.991503 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:25:12.991511 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:25:12.991520 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:25:12.991528 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:25:12.991537 kernel: .... node #0, CPUs: #1
Aug 13 00:25:12.991555 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:25:12.991563 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Aug 13 00:25:12.991571 kernel: Memory: 8077016K/8383228K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 299996K reserved, 0K cma-reserved)
Aug 13 00:25:12.991580 kernel: devtmpfs: initialized
Aug 13 00:25:12.991588 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:25:12.991597 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 00:25:12.991611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:25:12.991621 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:25:12.991629 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:25:12.991638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:25:12.991646 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:25:12.991654 kernel: audit: type=2000 audit(1755044709.030:1): state=initialized audit_enabled=0 res=1
Aug 13 00:25:12.991663 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:25:12.991671 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:25:12.991679 kernel: cpuidle: using governor menu
Aug 13 00:25:12.991685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:25:12.991693 kernel: dca service started, version 1.12.1
Aug 13 00:25:12.991701 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Aug 13 00:25:12.991711 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Aug 13 00:25:12.991719 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:25:12.991728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:25:12.991736 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:25:12.991744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:25:12.991752 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:25:12.991759 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:25:12.991770 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:25:12.991783 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:25:12.991791 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:25:12.991798 kernel: ACPI: Interpreter enabled Aug 13 00:25:12.991806 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:25:12.991814 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:25:12.991823 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:25:12.991830 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 00:25:12.991838 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 13 00:25:12.991846 kernel: iommu: Default domain type: Translated Aug 13 00:25:12.991854 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:25:12.991864 kernel: efivars: Registered efivars operations Aug 13 00:25:12.991872 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:25:12.991880 kernel: PCI: System does not support PCI Aug 13 00:25:12.991888 kernel: vgaarb: loaded Aug 13 00:25:12.991896 kernel: clocksource: Switched to clocksource tsc-early Aug 13 00:25:12.991905 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:25:12.991914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:25:12.991922 kernel: pnp: PnP ACPI init Aug 13 00:25:12.991931 kernel: pnp: PnP ACPI: found 3 devices Aug 13 00:25:12.991941 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:25:12.991950 kernel: NET: Registered PF_INET protocol family Aug 13 00:25:12.991958 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:25:12.991967 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 00:25:12.991976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:25:12.991985 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:25:12.991994 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 00:25:12.992003 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 00:25:12.992013 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:25:12.992021 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 00:25:12.992030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:25:12.992038 kernel: NET: Registered PF_XDP protocol family Aug 13 00:25:12.992047 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:25:12.992056 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 00:25:12.992064 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Aug 13 00:25:12.992073 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Aug 13 00:25:12.992082 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Aug 13 00:25:12.992091 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, 
max_idle_ns: 440795237604 ns Aug 13 00:25:12.992100 kernel: clocksource: Switched to clocksource tsc Aug 13 00:25:12.992109 kernel: Initialise system trusted keyrings Aug 13 00:25:12.992117 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 00:25:12.992126 kernel: Key type asymmetric registered Aug 13 00:25:12.992135 kernel: Asymmetric key parser 'x509' registered Aug 13 00:25:12.992143 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:25:12.992152 kernel: io scheduler mq-deadline registered Aug 13 00:25:12.992160 kernel: io scheduler kyber registered Aug 13 00:25:12.992170 kernel: io scheduler bfq registered Aug 13 00:25:12.992179 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:25:12.992187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:25:12.992195 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:25:12.992204 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 00:25:12.992212 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:25:12.992220 kernel: i8042: PNP: No PS/2 controller found. Aug 13 00:25:12.992345 kernel: rtc_cmos 00:02: registered as rtc0 Aug 13 00:25:12.992419 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T00:25:12 UTC (1755044712) Aug 13 00:25:12.992485 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 13 00:25:12.992495 kernel: intel_pstate: Intel P-state driver initializing Aug 13 00:25:12.992503 kernel: efifb: probing for efifb Aug 13 00:25:12.992512 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 00:25:12.992520 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 00:25:12.992529 kernel: efifb: scrolling: redraw Aug 13 00:25:12.992537 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 00:25:12.992727 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:25:12.992738 kernel: fb0: EFI VGA frame buffer device Aug 13 00:25:12.992747 kernel: pstore: Using crash dump compression: deflate Aug 13 00:25:12.992756 kernel: pstore: Registered efi_pstore as persistent store backend Aug 13 00:25:12.992764 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:25:12.992773 kernel: Segment Routing with IPv6 Aug 13 00:25:12.992781 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:25:12.992789 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:25:12.992798 kernel: Key type dns_resolver registered Aug 13 00:25:12.992806 kernel: IPI shorthand broadcast: enabled Aug 13 00:25:12.992816 kernel: sched_clock: Marking stable (3144005197, 100671863)->(3596066357, -351389297) Aug 13 00:25:12.992824 kernel: registered taskstats version 1 Aug 13 00:25:12.992832 kernel: Loading compiled-in X.509 certificates Aug 13 00:25:12.992839 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 00:25:12.992847 kernel: Demotion targets for Node 0: null Aug 13 00:25:12.992853 kernel: Key type .fscrypt registered Aug 13 00:25:12.992861 kernel: Key type fscrypt-provisioning registered Aug 13 00:25:12.992869 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:25:12.992877 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:25:12.992887 kernel: ima: No architecture policies found Aug 13 00:25:12.992894 kernel: clk: Disabling unused clocks Aug 13 00:25:12.992902 kernel: Warning: unable to open an initial console. Aug 13 00:25:12.992913 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 00:25:12.992928 kernel: Write protecting the kernel read-only data: 24576k Aug 13 00:25:12.992937 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 00:25:12.992945 kernel: Run /init as init process Aug 13 00:25:12.992953 kernel: with arguments: Aug 13 00:25:12.992962 kernel: /init Aug 13 00:25:12.992971 kernel: with environment: Aug 13 00:25:12.992979 kernel: HOME=/ Aug 13 00:25:12.992989 kernel: TERM=linux Aug 13 00:25:12.992997 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:25:12.993007 systemd[1]: Successfully made /usr/ read-only. Aug 13 00:25:12.993019 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:25:12.993029 systemd[1]: Detected virtualization microsoft. Aug 13 00:25:12.993040 systemd[1]: Detected architecture x86-64. Aug 13 00:25:12.993049 systemd[1]: Running in initrd. Aug 13 00:25:12.993058 systemd[1]: No hostname configured, using default hostname. Aug 13 00:25:12.993067 systemd[1]: Hostname set to . Aug 13 00:25:12.993076 systemd[1]: Initializing machine ID from random generator. Aug 13 00:25:12.993085 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:25:12.993095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:25:12.993104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:25:12.993116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:25:12.993125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:25:12.993134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:25:12.993144 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:25:12.993155 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:25:12.993164 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:25:12.993173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:25:12.993183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:25:12.993192 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:25:12.993201 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:25:12.993210 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:25:12.993219 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:25:12.993228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:25:12.993236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Aug 13 00:25:12.993245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:25:12.993254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:25:12.993264 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:25:12.993273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:25:12.993282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:25:12.993292 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:25:12.993300 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:25:12.993309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:25:12.993318 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:25:12.993327 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:25:12.993338 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:25:12.993347 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:25:12.993356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:25:12.993372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:25:12.993382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:25:12.993410 systemd-journald[205]: Collecting audit messages is disabled. Aug 13 00:25:12.993435 systemd-journald[205]: Journal started Aug 13 00:25:12.993457 systemd-journald[205]: Runtime Journal (/run/log/journal/583da1d7ee8442e08902fd7832aa1ce7) is 8M, max 158.9M, 150.9M free. Aug 13 00:25:12.996050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:25:12.991797 systemd-modules-load[206]: Inserted module 'overlay' Aug 13 00:25:13.003245 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:25:13.002331 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:25:13.004650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:25:13.009652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:25:13.025901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:25:13.030446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:25:13.037765 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:25:13.043580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:25:13.048505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:25:13.048526 kernel: Bridge firewalling registered Aug 13 00:25:13.048895 systemd-modules-load[206]: Inserted module 'br_netfilter' Aug 13 00:25:13.048947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:25:13.053148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:25:13.054534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 00:25:13.057634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:25:13.072292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:25:13.077698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:25:13.080657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:25:13.086681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:25:13.092351 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:25:13.116232 systemd-resolved[244]: Positive Trust Anchors: Aug 13 00:25:13.116244 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:25:13.122499 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:25:13.116280 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:25:13.126866 systemd-resolved[244]: Defaulting to hostname 'linux'. Aug 13 00:25:13.127646 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:25:13.138522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:25:13.186558 kernel: SCSI subsystem initialized Aug 13 00:25:13.194557 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:25:13.203558 kernel: iscsi: registered transport (tcp) Aug 13 00:25:13.221849 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:25:13.221887 kernel: QLogic iSCSI HBA Driver Aug 13 00:25:13.233683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:25:13.244441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:25:13.248930 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:25:13.277146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:25:13.281404 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Aug 13 00:25:13.322557 kernel: raid6: avx512x4 gen() 45260 MB/s
Aug 13 00:25:13.340554 kernel: raid6: avx512x2 gen() 44566 MB/s
Aug 13 00:25:13.357552 kernel: raid6: avx512x1 gen() 27561 MB/s
Aug 13 00:25:13.375553 kernel: raid6: avx2x4 gen() 39024 MB/s
Aug 13 00:25:13.393554 kernel: raid6: avx2x2 gen() 38305 MB/s
Aug 13 00:25:13.411678 kernel: raid6: avx2x1 gen() 30897 MB/s
Aug 13 00:25:13.411692 kernel: raid6: using algorithm avx512x4 gen() 45260 MB/s
Aug 13 00:25:13.429955 kernel: raid6: .... xor() 7590 MB/s, rmw enabled
Aug 13 00:25:13.429979 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 00:25:13.448570 kernel: xor: automatically using best checksumming function avx
Aug 13 00:25:13.564559 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:25:13.568366 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:25:13.571592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:25:13.593440 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Aug 13 00:25:13.598195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:25:13.603233 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:25:13.620619 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Aug 13 00:25:13.638428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:25:13.642376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:25:13.689980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:25:13.695674 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:25:13.735570 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:25:13.752554 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:25:13.764779 kernel: hv_vmbus: Vmbus version:5.3
Aug 13 00:25:13.768595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:25:13.776622 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:25:13.768814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:25:13.771651 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:25:13.784775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:25:13.792006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:25:13.794675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:25:13.804978 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Aug 13 00:25:13.805000 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:25:13.807634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:25:13.815143 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:25:13.815306 kernel: hv_vmbus: registering driver hv_pci
Aug 13 00:25:13.823556 kernel: PTP clock support registered
Aug 13 00:25:13.827565 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Aug 13 00:25:13.838559 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Aug 13 00:25:13.842176 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Aug 13 00:25:13.842486 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:25:13.842501 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:25:13.842673 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 00:25:13.852645 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Aug 13 00:25:13.855673 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Aug 13 00:25:13.864084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:25:13.869630 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:25:13.869666 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:25:13.878992 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:25:13.879047 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:25:13.879062 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
Aug 13 00:25:13.879567 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:25:13.654372 systemd-resolved[244]: Clock change detected. Flushing caches.
Aug 13 00:25:13.665857 systemd-journald[205]: Time jumped backwards, rotating.
Aug 13 00:25:13.665901 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Aug 13 00:25:13.669871 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Aug 13 00:25:13.670040 kernel: hv_vmbus: registering driver hv_storvsc
Aug 13 00:25:13.672044 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:25:13.677639 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52212189 (unnamed net_device) (uninitialized): VF slot 1 added
Aug 13 00:25:13.677802 kernel: scsi host0: storvsc_host_t
Aug 13 00:25:13.680539 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Aug 13 00:25:13.680564 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Aug 13 00:25:13.683032 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Aug 13 00:25:13.697296 kernel: nvme nvme0: pci function c05b:00:00.0
Aug 13 00:25:13.700689 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Aug 13 00:25:13.857063 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 00:25:13.861027 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:25:13.866235 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Aug 13 00:25:13.866458 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:25:13.868069 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Aug 13 00:25:13.882252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Aug 13 00:25:13.898034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Aug 13 00:25:13.919141 kernel: nvme nvme0: using unchecked data buffer
Aug 13 00:25:13.969513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Aug 13 00:25:13.990906 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Aug 13 00:25:14.025940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Aug 13 00:25:14.029329 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Aug 13 00:25:14.033711 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:25:14.080664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Aug 13 00:25:14.164768 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:25:14.167352 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:25:14.171337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:25:14.174051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:25:14.178114 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:25:14.197830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:25:14.703023 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Aug 13 00:25:14.703220 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Aug 13 00:25:14.705737 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Aug 13 00:25:14.707363 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:25:14.712025 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Aug 13 00:25:14.716176 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Aug 13 00:25:14.721025 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Aug 13 00:25:14.721053 kernel: pci 7870:00:00.0: enabling Extended Tags Aug 13 00:25:14.736823 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:25:14.737023 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Aug 13 00:25:14.741062 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Aug 13 00:25:14.746141 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Aug 13 00:25:14.754025 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Aug 13 00:25:14.757164 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52212189 eth0: VF registering: eth1 Aug 13 00:25:14.757324 kernel: mana 7870:00:00.0 eth1: joined to eth0 Aug 13 00:25:14.762031 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Aug 13 00:25:15.056732 disk-uuid[665]: The operation has completed successfully. Aug 13 00:25:15.059828 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:25:15.103130 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:25:15.103219 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:25:15.142416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:25:15.153096 sh[715]: Success Aug 13 00:25:15.193555 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:25:15.193599 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:25:15.196031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:25:15.203028 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 00:25:15.627433 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:25:15.632965 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:25:15.651179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:25:15.661383 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:25:15.661509 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (728) Aug 13 00:25:15.663050 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:25:15.665154 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:25:15.667024 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:25:16.384458 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:25:16.386216 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:25:16.387420 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:25:16.389214 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:25:16.393992 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:25:16.428130 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (761) Aug 13 00:25:16.434767 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:25:16.434803 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:25:16.434814 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:25:16.460029 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:25:16.460243 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:25:16.465867 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:25:16.481250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:25:16.485597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:25:16.516939 systemd-networkd[897]: lo: Link UP Aug 13 00:25:16.516947 systemd-networkd[897]: lo: Gained carrier Aug 13 00:25:16.522088 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Aug 13 00:25:16.518246 systemd-networkd[897]: Enumeration completed Aug 13 00:25:16.527414 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 13 00:25:16.518572 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:25:16.533098 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52212189 eth0: Data path switched to VF: enP30832s1 Aug 13 00:25:16.518575 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:25:16.520359 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 13 00:25:16.525166 systemd[1]: Reached target network.target - Network. Aug 13 00:25:16.529656 systemd-networkd[897]: enP30832s1: Link UP Aug 13 00:25:16.529721 systemd-networkd[897]: eth0: Link UP Aug 13 00:25:16.529859 systemd-networkd[897]: eth0: Gained carrier Aug 13 00:25:16.529870 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:25:16.532348 systemd-networkd[897]: enP30832s1: Gained carrier Aug 13 00:25:16.543058 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 13 00:25:18.350210 ignition[876]: Ignition 2.21.0 Aug 13 00:25:18.350223 ignition[876]: Stage: fetch-offline Aug 13 00:25:18.352654 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:25:18.350304 ignition[876]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:18.350311 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:18.359958 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:25:18.350394 ignition[876]: parsed url from cmdline: "" Aug 13 00:25:18.350397 ignition[876]: no config URL provided Aug 13 00:25:18.350402 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:25:18.350409 ignition[876]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:25:18.350413 ignition[876]: failed to fetch config: resource requires networking Aug 13 00:25:18.350562 ignition[876]: Ignition finished successfully Aug 13 00:25:18.384972 ignition[907]: Ignition 2.21.0 Aug 13 00:25:18.384978 ignition[907]: Stage: fetch Aug 13 00:25:18.385205 ignition[907]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:18.385214 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:18.385301 ignition[907]: parsed url from cmdline: "" Aug 13 00:25:18.385304 ignition[907]: no config URL provided Aug 13 00:25:18.385308 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:25:18.385315 ignition[907]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:25:18.385357 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:25:18.459767 ignition[907]: GET result: OK Aug 13 00:25:18.460693 ignition[907]: config has been read from IMDS userdata Aug 13 00:25:18.460729 ignition[907]: parsing config with SHA512: 489abf49bb71826278c486e4631603041e73ed166d81ed58c463f3bb283a44dea4ac887da49a0416ab12b32c7c92151ce67ee92b6ba4a8830ee31f89419c416a Aug 13 00:25:18.468138 unknown[907]: fetched base config from "system" Aug 13 00:25:18.468400 ignition[907]: fetch: fetch complete Aug 13 00:25:18.468159 unknown[907]: fetched base config from "system" Aug 13 00:25:18.468403 ignition[907]: fetch: fetch passed Aug 13 00:25:18.468162 unknown[907]: fetched user config from "azure" Aug 13 00:25:18.468428 ignition[907]: Ignition finished successfully Aug 13 00:25:18.472157 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:25:18.477032 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:25:18.498223 ignition[913]: Ignition 2.21.0 Aug 13 00:25:18.498233 ignition[913]: Stage: kargs Aug 13 00:25:18.498434 ignition[913]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:18.500978 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Aug 13 00:25:18.498442 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:18.504656 systemd-networkd[897]: eth0: Gained IPv6LL Aug 13 00:25:18.499238 ignition[913]: kargs: kargs passed Aug 13 00:25:18.507103 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:25:18.499270 ignition[913]: Ignition finished successfully Aug 13 00:25:18.520393 ignition[920]: Ignition 2.21.0 Aug 13 00:25:18.520402 ignition[920]: Stage: disks Aug 13 00:25:18.520564 ignition[920]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:18.522958 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:25:18.520571 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:18.524833 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:25:18.522232 ignition[920]: disks: disks passed Aug 13 00:25:18.526722 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:25:18.522259 ignition[920]: Ignition finished successfully Aug 13 00:25:18.536058 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:25:18.536255 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:25:18.543050 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:25:18.545115 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:25:18.641165 systemd-fsck[928]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Aug 13 00:25:18.644657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:25:18.649874 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:25:19.082029 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:25:19.082121 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:25:19.086436 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:25:19.116837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:25:19.131111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:25:19.139110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:25:19.143498 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (937) Aug 13 00:25:19.147475 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:25:19.147500 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:25:19.147511 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:25:19.147941 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:25:19.147977 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:25:19.154314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:25:19.158154 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:25:19.164122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 00:25:20.318890 coreos-metadata[939]: Aug 13 00:25:20.318 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:25:20.322854 coreos-metadata[939]: Aug 13 00:25:20.322 INFO Fetch successful Aug 13 00:25:20.325093 coreos-metadata[939]: Aug 13 00:25:20.323 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:25:20.334406 coreos-metadata[939]: Aug 13 00:25:20.334 INFO Fetch successful Aug 13 00:25:20.337465 coreos-metadata[939]: Aug 13 00:25:20.337 INFO wrote hostname ci-4372.1.0-a-4baf804050 to /sysroot/etc/hostname Aug 13 00:25:20.339296 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:25:20.465801 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:25:20.502602 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:25:20.538036 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:25:20.627410 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:25:21.852596 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:25:21.854708 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:25:21.860003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:25:21.869002 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:25:21.873192 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:25:21.891361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:25:21.900771 ignition[1056]: INFO : Ignition 2.21.0 Aug 13 00:25:21.900771 ignition[1056]: INFO : Stage: mount Aug 13 00:25:21.904189 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:21.904189 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:21.904189 ignition[1056]: INFO : mount: mount passed Aug 13 00:25:21.904189 ignition[1056]: INFO : Ignition finished successfully Aug 13 00:25:21.906062 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:25:21.910437 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:25:21.924115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:25:21.948025 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1068) Aug 13 00:25:21.948056 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:25:21.950041 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:25:21.951396 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:25:21.955638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:25:21.979540 ignition[1085]: INFO : Ignition 2.21.0 Aug 13 00:25:21.979540 ignition[1085]: INFO : Stage: files Aug 13 00:25:21.983002 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:21.983002 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:21.983002 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:25:21.983002 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:25:21.983002 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:25:22.016993 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:25:22.018942 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:25:22.018942 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:25:22.017315 unknown[1085]: wrote ssh authorized keys file for user: core Aug 13 00:25:22.047772 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:25:22.050397 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:25:22.086510 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:25:22.305351 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:25:22.305351 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:25:22.311394 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:25:22.513119 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:25:22.700793 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:25:22.700793 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 
00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:25:22.708093 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:25:23.171418 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:25:23.758513 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:25:23.758513 ignition[1085]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:25:23.881429 ignition[1085]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:25:23.889294 ignition[1085]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:25:23.889294 ignition[1085]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:25:23.896063 ignition[1085]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:25:23.896063 ignition[1085]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:25:23.896063 ignition[1085]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:25:23.896063 ignition[1085]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:25:23.896063 ignition[1085]: INFO : files: files passed Aug 13 00:25:23.896063 ignition[1085]: INFO : Ignition finished successfully Aug 13 00:25:23.893350 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:25:23.897299 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:25:23.905721 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:25:23.910303 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:25:23.910408 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 13 00:25:23.940234 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:25:23.940234 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:25:23.946079 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:25:23.945311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:25:23.949307 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:25:23.954687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:25:23.990309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:25:23.990392 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:25:23.995394 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:25:24.000087 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:25:24.004112 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:25:24.004744 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:25:24.027190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:25:24.030136 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:25:24.051953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:25:24.058134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:25:24.058768 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:25:24.063067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:25:24.064269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:25:24.070161 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:25:24.073150 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:25:24.077177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:25:24.078035 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:25:24.078396 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:25:24.078604 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:25:24.078790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:25:24.087179 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:25:24.092187 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:25:24.097170 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:25:24.099765 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:25:24.104145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:25:24.104274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:25:24.108340 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:25:24.112164 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:25:24.115314 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 00:25:24.115556 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:25:24.115892 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:25:24.155446 ignition[1139]: INFO : Ignition 2.21.0 Aug 13 00:25:24.155446 ignition[1139]: INFO : Stage: umount Aug 13 00:25:24.116039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:25:24.161038 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:25:24.161038 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:25:24.161038 ignition[1139]: INFO : umount: umount passed Aug 13 00:25:24.161038 ignition[1139]: INFO : Ignition finished successfully Aug 13 00:25:24.116581 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:25:24.116704 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:25:24.116944 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:25:24.117066 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:25:24.117334 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:25:24.117424 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:25:24.129384 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:25:24.135344 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:25:24.143679 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:25:24.143840 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:25:24.150722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:25:24.150836 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:25:24.163266 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:25:24.163345 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:25:24.168653 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:25:24.168857 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:25:24.170301 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:25:24.170341 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:25:24.170557 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:25:24.170583 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:25:24.170784 systemd[1]: Stopped target network.target - Network. Aug 13 00:25:24.170806 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:25:24.170829 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:25:24.171061 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:25:24.171098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:25:24.173632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:25:24.185168 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:25:24.195666 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:25:24.225084 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:25:24.225135 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Aug 13 00:25:24.229076 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:25:24.229111 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:25:24.233067 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:25:24.233112 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:25:24.235656 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:25:24.235689 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:25:24.236343 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:25:24.252935 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:25:24.254578 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:25:24.255126 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:25:24.255210 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:25:24.258584 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:25:24.258801 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:25:24.258880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:25:24.263313 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:25:24.263396 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:25:24.266991 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:25:24.267198 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:25:24.267270 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:25:24.273505 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 00:25:24.278114 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:25:24.278148 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:25:24.280934 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:25:24.280978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:25:24.285575 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:25:24.292617 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:25:24.292664 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:25:24.306293 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:25:24.306341 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:25:24.306937 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:25:24.306971 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:25:24.307189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:25:24.307221 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:25:24.309076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:25:24.334149 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52212189 eth0: Data path switched from VF: enP30832s1 Aug 13 00:25:24.334311 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 13 00:25:24.310210 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Aug 13 00:25:24.310261 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:25:24.332701 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:25:24.332837 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:25:24.338879 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:25:24.338953 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:25:24.341350 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:25:24.341408 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:25:24.342220 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:25:24.342246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:25:24.342417 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:25:24.342449 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:25:24.349995 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:25:24.350049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:25:24.355055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:25:24.355100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:25:24.359777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:25:24.364073 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 00:25:24.364124 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:25:24.368292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:25:24.368339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:25:24.370794 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:25:24.370831 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:25:24.373117 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:25:24.373150 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:25:24.377133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:25:24.377186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:25:24.384737 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 00:25:24.384791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 00:25:24.384825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:25:24.384860 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:25:24.385153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:25:24.385230 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:25:24.389375 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:25:24.396459 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:25:24.450045 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:25:24.410392 systemd[1]: Switching root. Aug 13 00:25:24.450830 systemd-journald[205]: Journal stopped Aug 13 00:25:28.505624 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:25:28.505649 kernel: SELinux: policy capability open_perms=1 Aug 13 00:25:28.505659 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:25:28.505667 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:25:28.505675 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:25:28.505683 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:25:28.505692 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:25:28.505701 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:25:28.505709 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 00:25:28.505717 kernel: audit: type=1403 audit(1755044725.439:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:25:28.505728 systemd[1]: Successfully loaded SELinux policy in 59.128ms. Aug 13 00:25:28.505738 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.690ms. Aug 13 00:25:28.505748 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:25:28.505758 systemd[1]: Detected virtualization microsoft. Aug 13 00:25:28.505769 systemd[1]: Detected architecture x86-64. Aug 13 00:25:28.505778 systemd[1]: Detected first boot. Aug 13 00:25:28.505787 systemd[1]: Hostname set to . Aug 13 00:25:28.505796 systemd[1]: Initializing machine ID from random generator. Aug 13 00:25:28.505806 zram_generator::config[1182]: No configuration found. Aug 13 00:25:28.505817 kernel: Guest personality initialized and is inactive Aug 13 00:25:28.505825 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Aug 13 00:25:28.505833 kernel: Initialized host personality Aug 13 00:25:28.505841 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:25:28.505850 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:25:28.505860 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:25:28.505869 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:25:28.505879 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:25:28.505887 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:25:28.505896 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:25:28.505906 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:25:28.505914 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:25:28.505924 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:25:28.505933 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:25:28.505945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:25:28.505955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:25:28.505965 systemd[1]: Created slice user.slice - User and Session Slice. 
Aug 13 00:25:28.505976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:25:28.505986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:25:28.505995 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:25:28.506006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:25:28.506033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:25:28.506044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:25:28.506055 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:25:28.506064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:25:28.506072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:25:28.506081 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:25:28.506091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:25:28.506101 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:25:28.506111 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:25:28.506123 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:25:28.506133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:25:28.506143 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:25:28.506153 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:25:28.506163 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:25:28.506172 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:25:28.506184 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:25:28.506194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:25:28.506226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:25:28.506235 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:25:28.506244 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:25:28.506253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:25:28.506262 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:25:28.506272 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:25:28.506282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:28.506291 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:25:28.506301 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:25:28.506311 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:25:28.506320 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:25:28.506330 systemd[1]: Reached target machines.target - Containers. 
Aug 13 00:25:28.506340 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:25:28.506352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:25:28.506363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:25:28.506373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:25:28.506382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:25:28.506392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:25:28.506402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:25:28.506412 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:25:28.506422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:25:28.506434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:25:28.506444 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:25:28.506454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:25:28.506465 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:25:28.506477 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:25:28.506488 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:25:28.506498 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:25:28.506508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:25:28.506519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:25:28.506530 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:25:28.506540 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:25:28.506551 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:25:28.506561 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:25:28.506571 systemd[1]: Stopped verity-setup.service. Aug 13 00:25:28.506581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:28.506591 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:25:28.506601 kernel: loop: module loaded Aug 13 00:25:28.506612 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:25:28.506623 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:25:28.506633 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:25:28.506643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:25:28.506653 kernel: fuse: init (API version 7.41) Aug 13 00:25:28.506662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:25:28.506672 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 13 00:25:28.506700 systemd-journald[1265]: Collecting audit messages is disabled. Aug 13 00:25:28.506725 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:25:28.506735 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:25:28.506746 systemd-journald[1265]: Journal started Aug 13 00:25:28.506769 systemd-journald[1265]: Runtime Journal (/run/log/journal/cf6fe76560c144449c1c97c60ab40ced) is 8M, max 158.9M, 150.9M free. Aug 13 00:25:28.039060 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:25:28.049494 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:25:28.049879 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:25:28.514067 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:25:28.515494 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:25:28.517535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:25:28.517683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:25:28.521284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:25:28.521488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:25:28.522972 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:25:28.523140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:25:28.526264 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:25:28.526389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:25:28.529260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:25:28.530797 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:25:28.534240 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:25:28.543855 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:25:28.546272 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:25:28.552104 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:25:28.554513 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:25:28.554548 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:25:28.557278 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:25:28.564125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:25:28.566640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:25:28.572363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:25:28.578149 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:25:28.581110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:25:28.581933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Aug 13 00:25:28.584180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:25:28.586130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:25:28.589497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:25:28.598484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:25:28.604459 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:25:28.606715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:25:28.610175 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:25:28.615307 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:25:28.618701 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:25:28.625130 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:25:28.628343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:25:28.649697 systemd-journald[1265]: Time spent on flushing to /var/log/journal/cf6fe76560c144449c1c97c60ab40ced is 109.181ms for 991 entries. Aug 13 00:25:28.649697 systemd-journald[1265]: System Journal (/var/log/journal/cf6fe76560c144449c1c97c60ab40ced) is 11.8M, max 2.6G, 2.6G free. Aug 13 00:25:28.816600 systemd-journald[1265]: Received client request to flush runtime journal. Aug 13 00:25:28.816635 kernel: ACPI: bus type drm_connector registered Aug 13 00:25:28.816647 systemd-journald[1265]: /var/log/journal/cf6fe76560c144449c1c97c60ab40ced/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Aug 13 00:25:28.816662 systemd-journald[1265]: Rotating system journal. Aug 13 00:25:28.816675 kernel: loop0: detected capacity change from 0 to 28496 Aug 13 00:25:28.660288 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:25:28.660436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:25:28.787307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:25:28.794002 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:25:28.817301 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:25:28.886857 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Aug 13 00:25:28.886873 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Aug 13 00:25:28.889825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:25:28.894139 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:25:28.947099 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:25:28.950869 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:25:28.974738 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Aug 13 00:25:28.974973 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Aug 13 00:25:28.978722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:25:29.051220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 00:25:29.496033 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:25:29.539033 kernel: loop1: detected capacity change from 0 to 146240 Aug 13 00:25:29.673027 kernel: loop2: detected capacity change from 0 to 221472 Aug 13 00:25:29.709111 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 00:25:29.796043 kernel: loop4: detected capacity change from 0 to 28496 Aug 13 00:25:29.807031 kernel: loop5: detected capacity change from 0 to 146240 Aug 13 00:25:29.821027 kernel: loop6: detected capacity change from 0 to 221472 Aug 13 00:25:29.837033 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 00:25:29.848041 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 13 00:25:29.848424 (sd-merge)[1350]: Merged extensions into '/usr'. Aug 13 00:25:29.851915 systemd[1]: Reload requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:25:29.851929 systemd[1]: Reloading... Aug 13 00:25:29.911103 zram_generator::config[1381]: No configuration found. Aug 13 00:25:29.997162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:25:30.097274 systemd[1]: Reloading finished in 244 ms. Aug 13 00:25:30.122975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:25:30.129818 systemd[1]: Starting ensure-sysext.service... Aug 13 00:25:30.134134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:25:30.156539 systemd[1]: Reload requested from client PID 1434 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:25:30.156554 systemd[1]: Reloading... Aug 13 00:25:30.185220 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:25:30.185250 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:25:30.185499 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:25:30.185692 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:25:30.186345 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:25:30.188125 systemd-tmpfiles[1435]: ACLs are not supported, ignoring. Aug 13 00:25:30.188198 systemd-tmpfiles[1435]: ACLs are not supported, ignoring. Aug 13 00:25:30.196699 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:25:30.196714 systemd-tmpfiles[1435]: Skipping /boot Aug 13 00:25:30.211256 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:25:30.211267 systemd-tmpfiles[1435]: Skipping /boot Aug 13 00:25:30.227044 zram_generator::config[1460]: No configuration found. Aug 13 00:25:30.301194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:25:30.373088 systemd[1]: Reloading finished in 215 ms. Aug 13 00:25:30.398578 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Aug 13 00:25:30.413351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:25:30.421867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:25:30.434727 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:25:30.438201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:25:30.448195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:25:30.456479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:25:30.462099 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:25:30.467522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.467703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:25:30.472232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:25:30.476933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:25:30.485554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:25:30.489154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:25:30.489264 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:25:30.489353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.496863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.497091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:25:30.497273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:25:30.497397 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:25:30.499504 systemd-udevd[1534]: Using default interface naming scheme 'v255'. Aug 13 00:25:30.500885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:25:30.502526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.504186 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:25:30.513188 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Aug 13 00:25:30.516248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.516505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 00:25:30.519142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:25:30.521914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:25:30.522057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:25:30.522247 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:25:30.524342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:25:30.526232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:25:30.526406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:25:30.529316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:25:30.530049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:25:30.532850 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:25:30.533006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:25:30.536963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:25:30.538266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:25:30.538626 systemd[1]: Finished ensure-sysext.service. Aug 13 00:25:30.542435 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:25:30.554875 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:25:30.555037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:25:30.556842 augenrules[1560]: No rules Aug 13 00:25:30.557412 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:25:30.557593 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:25:30.572023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:25:30.580034 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:25:30.591971 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:25:30.726958 systemd-networkd[1583]: lo: Link UP Aug 13 00:25:30.726971 systemd-networkd[1583]: lo: Gained carrier Aug 13 00:25:30.727866 systemd-networkd[1583]: Enumeration completed Aug 13 00:25:30.727942 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:25:30.732126 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:25:30.736976 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:25:30.739264 systemd-resolved[1528]: Positive Trust Anchors: Aug 13 00:25:30.739490 systemd-resolved[1528]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:25:30.739571 systemd-resolved[1528]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:25:30.754560 systemd-resolved[1528]: Using system hostname 'ci-4372.1.0-a-4baf804050'. Aug 13 00:25:30.756193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:25:30.758499 systemd[1]: Reached target network.target - Network. Aug 13 00:25:30.760510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:25:30.780135 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:25:30.819215 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:25:30.822108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#236 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Aug 13 00:25:30.824029 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:25:30.826031 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 00:25:30.827040 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:25:30.832277 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:25:30.841126 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:25:30.852063 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:25:30.852007 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:25:30.852976 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:25:30.855031 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Aug 13 00:25:30.858033 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 13 00:25:30.860793 systemd-networkd[1583]: enP30832s1: Link UP Aug 13 00:25:30.861061 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52212189 eth0: Data path switched to VF: enP30832s1 Aug 13 00:25:30.862173 systemd-networkd[1583]: eth0: Link UP Aug 13 00:25:30.862243 systemd-networkd[1583]: eth0: Gained carrier Aug 13 00:25:30.862304 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:25:30.869268 systemd-networkd[1583]: enP30832s1: Gained carrier Aug 13 00:25:30.881766 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:25:30.881817 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:25:30.882067 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 13 00:25:30.937378 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Aug 13 00:25:30.975201 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 00:25:30.996514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:25:30.996678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:25:31.002250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:25:31.038906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:25:31.039524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:25:31.047231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:25:31.230842 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:25:31.234762 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:25:31.281951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Aug 13 00:25:31.285193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:25:31.307067 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Aug 13 00:25:31.390824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:25:31.854226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:25:32.712178 systemd-networkd[1583]: eth0: Gained IPv6LL Aug 13 00:25:32.714265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:25:32.717295 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:25:40.436084 ldconfig[1316]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:25:40.475197 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:25:40.479265 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:25:40.500338 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:25:40.504263 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:25:40.505620 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:25:40.507288 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:25:40.510144 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:25:40.511894 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:25:40.513271 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:25:40.516129 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:25:40.519061 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:25:40.519099 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:25:40.522078 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:25:40.524561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:25:40.527273 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Aug 13 00:25:40.532179 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:25:40.536207 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:25:40.537935 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:25:40.542456 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:25:40.545375 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:25:40.547652 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:25:40.549660 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:25:40.551045 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:25:40.554103 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:25:40.554128 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:25:40.555880 systemd[1]: Starting chronyd.service - NTP client/server... Aug 13 00:25:40.559916 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:25:40.565866 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:25:40.569116 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:25:40.572827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:25:40.576797 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:25:40.580497 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:25:40.582359 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:25:40.588234 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:25:40.590472 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Aug 13 00:25:40.599853 jq[1680]: false Aug 13 00:25:40.592204 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Aug 13 00:25:40.595596 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Aug 13 00:25:40.597174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:25:40.603188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:25:40.610754 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:25:40.612745 KVP[1686]: KVP starting; pid is:1686 Aug 13 00:25:40.616156 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:25:40.619772 KVP[1686]: KVP LIC Version: 3.1 Aug 13 00:25:40.620039 kernel: hv_utils: KVP IC version 4.0 Aug 13 00:25:40.621554 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:25:40.633155 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:25:40.642182 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 00:25:40.645241 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:25:40.645646 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:25:40.648217 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:25:40.655213 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:25:40.663707 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 13 00:25:40.668205 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:25:40.671737 chronyd[1704]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 13 00:25:40.672548 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:25:40.672710 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:25:40.675695 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:25:40.677706 jq[1699]: true Aug 13 00:25:40.678536 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:25:40.698253 jq[1708]: true Aug 13 00:25:40.778790 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing passwd entry cache Aug 13 00:25:40.777873 oslogin_cache_refresh[1685]: Refreshing passwd entry cache Aug 13 00:25:40.781814 extend-filesystems[1681]: Found /dev/nvme0n1p6 Aug 13 00:25:40.789476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:25:40.798561 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting users, quitting Aug 13 00:25:40.798561 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:25:40.798561 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing group entry cache Aug 13 00:25:40.798423 oslogin_cache_refresh[1685]: Failure getting users, quitting Aug 13 00:25:40.798438 oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:25:40.798475 oslogin_cache_refresh[1685]: Refreshing group entry cache Aug 13 00:25:40.809035 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting groups, quitting Aug 13 00:25:40.809035 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:25:40.808723 oslogin_cache_refresh[1685]: Failure getting groups, quitting Aug 13 00:25:40.808732 oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:25:40.810349 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:25:40.810671 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:25:40.852158 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:25:40.853406 (ntainerd)[1741]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:25:40.854259 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 00:25:40.940860 chronyd[1704]: Timezone right/UTC failed leap second check, ignoring Aug 13 00:25:40.941059 chronyd[1704]: Loaded seccomp filter (level 2) Aug 13 00:25:40.943535 systemd[1]: Started chronyd.service - NTP client/server. Aug 13 00:25:40.948304 systemd-logind[1696]: New seat seat0. Aug 13 00:25:40.950191 systemd-logind[1696]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:25:40.950312 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:25:41.186255 extend-filesystems[1681]: Found /dev/nvme0n1p9 Aug 13 00:25:41.196229 update_engine[1697]: I20250813 00:25:41.196149 1697 main.cc:92] Flatcar Update Engine starting Aug 13 00:25:41.220813 tar[1706]: linux-amd64/helm Aug 13 00:25:41.233241 extend-filesystems[1681]: Checking size of /dev/nvme0n1p9 Aug 13 00:25:41.241979 extend-filesystems[1681]: Old size kept for /dev/nvme0n1p9 Aug 13 00:25:41.488977 update_engine[1697]: I20250813 00:25:41.388345 1697 update_check_scheduler.cc:74] Next update check in 4m46s Aug 13 00:25:41.243280 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:25:41.381274 dbus-daemon[1678]: [system] SELinux support is enabled Aug 13 00:25:41.243508 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:25:41.387817 dbus-daemon[1678]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:25:41.381401 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:25:41.385224 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:25:41.385250 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:25:41.388125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:25:41.388146 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:25:41.392509 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:25:41.395524 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:25:41.547065 coreos-metadata[1677]: Aug 13 00:25:41.547 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.550 INFO Fetch successful Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.550 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.553 INFO Fetch successful Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.553 INFO Fetching http://168.63.129.16/machine/ee08295c-c330-4ebd-8462-1ea72947692d/2807226c%2Dbcb5%2D4bd6%2D9fd2%2Dd4bf4fa42093.%5Fci%2D4372.1.0%2Da%2D4baf804050?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.554 INFO Fetch successful Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.554 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:25:41.576990 coreos-metadata[1677]: Aug 13 00:25:41.566 INFO Fetch successful Aug 13 00:25:41.608680 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
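The coreos-metadata fetches above touch the two Azure endpoints a guest normally talks to: the WireServer at 168.63.129.16 (goal state and shared config) and the Instance Metadata Service at 169.254.169.254 (VM properties such as vmSize). To repeat the IMDS query by hand, the service requires the Metadata header; this is an illustrative command, assuming curl is available on the node:

    # same vmSize query coreos-metadata issued, run manually
    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"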
Aug 13 00:25:41.612557 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:25:42.434662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:25:42.444377 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:25:42.494269 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:25:42.882132 sshd_keygen[1748]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:25:42.915297 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:25:42.920213 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:25:42.925248 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 13 00:25:42.944031 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:25:43.036805 kubelet[1789]: E0813 00:25:42.984976 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:25:42.944241 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:25:42.948673 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:25:42.963342 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:25:42.968600 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:25:42.973148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:25:42.975380 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:25:42.985944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:25:42.986073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:25:42.986288 systemd[1]: kubelet.service: Consumed 967ms CPU time, 263.8M memory peak. Aug 13 00:25:43.058128 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 13 00:25:43.426912 tar[1706]: linux-amd64/LICENSE Aug 13 00:25:43.426912 tar[1706]: linux-amd64/README.md Aug 13 00:25:43.433415 bash[1732]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:25:43.434961 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:25:43.438174 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 00:25:43.442936 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
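The kubelet exit above is expected on a node that has not been bootstrapped yet: the service points at /var/lib/kubelet/config.yaml, and that file is normally written later, for example by kubeadm join, so the unit fails and systemd keeps retrying (see the restart counters further down). Illustrative checks, assuming shell access to the node:

    # the config file the kubelet is looking for does not exist yet
    ls -l /var/lib/kubelet/config.yaml
    # the unit log shows the same "failed to load kubelet config file" error on every attempt
    journalctl -u kubelet -n 20 --no-pager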
Aug 13 00:25:44.093960 containerd[1741]: time="2025-08-13T00:25:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:25:44.094979 containerd[1741]: time="2025-08-13T00:25:44.094936456Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:25:44.100426 containerd[1741]: time="2025-08-13T00:25:44.100394738Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.492µs" Aug 13 00:25:44.100426 containerd[1741]: time="2025-08-13T00:25:44.100417061Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:25:44.100521 containerd[1741]: time="2025-08-13T00:25:44.100433454Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:25:44.100572 containerd[1741]: time="2025-08-13T00:25:44.100559353Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:25:44.100600 containerd[1741]: time="2025-08-13T00:25:44.100571500Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:25:44.100600 containerd[1741]: time="2025-08-13T00:25:44.100588845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100656 containerd[1741]: time="2025-08-13T00:25:44.100630833Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100656 containerd[1741]: time="2025-08-13T00:25:44.100651651Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100839 containerd[1741]: time="2025-08-13T00:25:44.100823857Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100839 containerd[1741]: time="2025-08-13T00:25:44.100834724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100881 containerd[1741]: time="2025-08-13T00:25:44.100843982Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100881 containerd[1741]: time="2025-08-13T00:25:44.100850934Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:25:44.100917 containerd[1741]: time="2025-08-13T00:25:44.100905238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:25:44.101058 containerd[1741]: time="2025-08-13T00:25:44.101042949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:25:44.101082 containerd[1741]: time="2025-08-13T00:25:44.101063472Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Aug 13 00:25:44.101082 containerd[1741]: time="2025-08-13T00:25:44.101071901Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:25:44.101121 containerd[1741]: time="2025-08-13T00:25:44.101096638Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:25:44.101295 containerd[1741]: time="2025-08-13T00:25:44.101281532Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:25:44.101329 containerd[1741]: time="2025-08-13T00:25:44.101321770Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:25:44.592376 containerd[1741]: time="2025-08-13T00:25:44.592335103Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:25:44.592487 containerd[1741]: time="2025-08-13T00:25:44.592410204Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:25:44.592487 containerd[1741]: time="2025-08-13T00:25:44.592428388Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:25:44.592487 containerd[1741]: time="2025-08-13T00:25:44.592457458Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:25:44.592487 containerd[1741]: time="2025-08-13T00:25:44.592470435Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:25:44.592487 containerd[1741]: time="2025-08-13T00:25:44.592480461Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:25:44.592576 containerd[1741]: time="2025-08-13T00:25:44.592493263Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:25:44.592576 containerd[1741]: time="2025-08-13T00:25:44.592505751Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:25:44.592576 containerd[1741]: time="2025-08-13T00:25:44.592546180Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:25:44.592576 containerd[1741]: time="2025-08-13T00:25:44.592557098Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:25:44.592576 containerd[1741]: time="2025-08-13T00:25:44.592565906Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:25:44.592646 containerd[1741]: time="2025-08-13T00:25:44.592577662Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:25:44.592736 containerd[1741]: time="2025-08-13T00:25:44.592720483Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:25:44.592756 containerd[1741]: time="2025-08-13T00:25:44.592747244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:25:44.592771 containerd[1741]: time="2025-08-13T00:25:44.592762832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:25:44.592786 containerd[1741]: time="2025-08-13T00:25:44.592774568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Aug 13 00:25:44.592803 containerd[1741]: time="2025-08-13T00:25:44.592785580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:25:44.592803 containerd[1741]: time="2025-08-13T00:25:44.592795160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:25:44.592842 containerd[1741]: time="2025-08-13T00:25:44.592805428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:25:44.592842 containerd[1741]: time="2025-08-13T00:25:44.592823438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:25:44.592842 containerd[1741]: time="2025-08-13T00:25:44.592834020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:25:44.592894 containerd[1741]: time="2025-08-13T00:25:44.592843703Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:25:44.592894 containerd[1741]: time="2025-08-13T00:25:44.592855303Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:25:44.592955 containerd[1741]: time="2025-08-13T00:25:44.592927637Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:25:44.592976 containerd[1741]: time="2025-08-13T00:25:44.592956304Z" level=info msg="Start snapshots syncer" Aug 13 00:25:44.593024 containerd[1741]: time="2025-08-13T00:25:44.592992103Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:25:44.593310 containerd[1741]: time="2025-08-13T00:25:44.593286767Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:25:44.593420 containerd[1741]: time="2025-08-13T00:25:44.593322910Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:25:44.593420 containerd[1741]: time="2025-08-13T00:25:44.593408188Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:25:44.593530 containerd[1741]: time="2025-08-13T00:25:44.593516168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:25:44.593569 containerd[1741]: time="2025-08-13T00:25:44.593535647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:25:44.593569 containerd[1741]: time="2025-08-13T00:25:44.593546637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:25:44.593569 containerd[1741]: time="2025-08-13T00:25:44.593556721Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:25:44.593569 containerd[1741]: time="2025-08-13T00:25:44.593566981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:25:44.593641 containerd[1741]: time="2025-08-13T00:25:44.593577489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:25:44.593641 containerd[1741]: time="2025-08-13T00:25:44.593588838Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:25:44.593641 containerd[1741]: time="2025-08-13T00:25:44.593614738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:25:44.593641 containerd[1741]: 
time="2025-08-13T00:25:44.593625045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:25:44.593641 containerd[1741]: time="2025-08-13T00:25:44.593634790Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:25:44.593740 containerd[1741]: time="2025-08-13T00:25:44.593660270Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:25:44.593740 containerd[1741]: time="2025-08-13T00:25:44.593675348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:25:44.593740 containerd[1741]: time="2025-08-13T00:25:44.593684536Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:25:44.593740 containerd[1741]: time="2025-08-13T00:25:44.593694455Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:25:44.593740 containerd[1741]: time="2025-08-13T00:25:44.593702344Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593743951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593759457Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593776310Z" level=info msg="runtime interface created" Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593781699Z" level=info msg="created NRI interface" Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593790401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593801644Z" level=info msg="Connect containerd service" Aug 13 00:25:44.593852 containerd[1741]: time="2025-08-13T00:25:44.593824822Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:25:44.596228 containerd[1741]: time="2025-08-13T00:25:44.595906194Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:25:45.848745 containerd[1741]: time="2025-08-13T00:25:45.848667956Z" level=info msg="Start subscribing containerd event" Aug 13 00:25:45.849213 containerd[1741]: time="2025-08-13T00:25:45.848726246Z" level=info msg="Start recovering state" Aug 13 00:25:45.849213 containerd[1741]: time="2025-08-13T00:25:45.848885604Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:25:45.849213 containerd[1741]: time="2025-08-13T00:25:45.848934836Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849331451Z" level=info msg="Start event monitor" Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849351265Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849360615Z" level=info msg="Start streaming server" Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849387063Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849395796Z" level=info msg="runtime interface starting up..." Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849401851Z" level=info msg="starting plugins..." Aug 13 00:25:45.855641 containerd[1741]: time="2025-08-13T00:25:45.849415042Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:25:45.849628 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:25:45.852828 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:25:45.856122 systemd[1]: Startup finished in 3.282s (kernel) + 12.873s (initrd) + 20.474s (userspace) = 36.630s. Aug 13 00:25:45.857996 containerd[1741]: time="2025-08-13T00:25:45.856387976Z" level=info msg="containerd successfully booted in 1.755879s" Aug 13 00:25:46.838279 login[1818]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 00:25:46.839903 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:25:46.850638 systemd-logind[1696]: New session 1 of user core. Aug 13 00:25:46.851583 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:25:46.852584 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:25:46.874921 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:25:46.876888 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:25:46.889634 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:25:46.891504 systemd-logind[1696]: New session c1 of user core. Aug 13 00:25:47.073423 systemd[1852]: Queued start job for default target default.target. Aug 13 00:25:47.079745 systemd[1852]: Created slice app.slice - User Application Slice. Aug 13 00:25:47.079775 systemd[1852]: Reached target paths.target - Paths. Aug 13 00:25:47.079807 systemd[1852]: Reached target timers.target - Timers. Aug 13 00:25:47.080656 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:25:47.087553 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:25:47.087602 systemd[1852]: Reached target sockets.target - Sockets. Aug 13 00:25:47.087636 systemd[1852]: Reached target basic.target - Basic System. Aug 13 00:25:47.087704 systemd[1852]: Reached target default.target - Main User Target. Aug 13 00:25:47.087728 systemd[1852]: Startup finished in 191ms. Aug 13 00:25:47.087795 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:25:47.090661 systemd[1]: Started session-1.scope - Session 1 of User core. 
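Two things stand out in the containerd startup above: the CNI error ("no network config found in /etc/cni/net.d") only means no network plugin has installed its configuration yet, and containerd still comes up and serves on /run/containerd/containerd.sock, as the "containerd successfully booted" line confirms. Illustrative checks, not from the log, assuming the ctr client shipped with containerd:

    # empty until a CNI plugin (flannel, calico, ...) drops its conflist here
    ls -l /etc/cni/net.d
    # confirms the daemon is answering on its default socket
    ctr version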
Aug 13 00:25:47.236720 waagent[1827]: 2025-08-13T00:25:47.236657Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Aug 13 00:25:47.237205 waagent[1827]: 2025-08-13T00:25:47.236949Z INFO Daemon Daemon OS: flatcar 4372.1.0 Aug 13 00:25:47.237205 waagent[1827]: 2025-08-13T00:25:47.237189Z INFO Daemon Daemon Python: 3.11.12 Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.237401Z INFO Daemon Daemon Run daemon Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.237738Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.237955Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.238130Z INFO Daemon Daemon Activate resource disk Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.238266Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.239804Z INFO Daemon Daemon Found device: None Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.239897Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.240210Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.240680Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:25:47.243079 waagent[1827]: 2025-08-13T00:25:47.240849Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:25:47.257710 waagent[1827]: 2025-08-13T00:25:47.250411Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 00:25:47.257710 waagent[1827]: 2025-08-13T00:25:47.251139Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:25:47.257710 waagent[1827]: 2025-08-13T00:25:47.251671Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:25:47.257710 waagent[1827]: 2025-08-13T00:25:47.251728Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:25:47.493899 waagent[1827]: 2025-08-13T00:25:47.493817Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:25:47.581876 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:25:47.584081 waagent[1827]: 2025-08-13T00:25:47.584003Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:25:47.593249 waagent[1827]: 2025-08-13T00:25:47.584222Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:25:47.593249 waagent[1827]: 2025-08-13T00:25:47.584657Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 00:25:47.593249 waagent[1827]: 2025-08-13T00:25:47.584876Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:25:47.593249 waagent[1827]: 2025-08-13T00:25:47.585036Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:25:47.593249 waagent[1827]: 2025-08-13T00:25:47.585168Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:25:47.596985 waagent[1827]: 2025-08-13T00:25:47.596949Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:25:47.598038 waagent[1827]: 2025-08-13T00:25:47.597272Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:25:47.598038 waagent[1827]: 2025-08-13T00:25:47.597350Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:25:47.838589 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:25:47.845062 systemd-logind[1696]: New session 2 of user core. Aug 13 00:25:47.848139 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:25:47.884475 waagent[1827]: 2025-08-13T00:25:47.884415Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:25:47.885512 waagent[1827]: 2025-08-13T00:25:47.884628Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 00:25:47.894507 waagent[1827]: 2025-08-13T00:25:47.894467Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:25:47.903393 waagent[1827]: 2025-08-13T00:25:47.903364Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 00:25:47.906803 waagent[1827]: 2025-08-13T00:25:47.903879Z INFO Daemon Aug 13 00:25:47.906803 waagent[1827]: 2025-08-13T00:25:47.904398Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b44752c8-2a6f-40d8-bad0-f2c432c10032 eTag: 3970843385702472171 source: Fabric] Aug 13 00:25:47.906803 waagent[1827]: 2025-08-13T00:25:47.904671Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 13 00:25:47.906803 waagent[1827]: 2025-08-13T00:25:47.905369Z INFO Daemon Aug 13 00:25:47.906803 waagent[1827]: 2025-08-13T00:25:47.905984Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:25:47.913342 waagent[1827]: 2025-08-13T00:25:47.913312Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 00:25:47.984342 waagent[1827]: 2025-08-13T00:25:47.984300Z INFO Daemon Downloaded certificate {'thumbprint': 'A3AFF32C360152400D6FB3254B63B476C3ABC475', 'hasPrivateKey': True} Aug 13 00:25:47.985625 waagent[1827]: 2025-08-13T00:25:47.984709Z INFO Daemon Fetch goal state completed Aug 13 00:25:47.996231 waagent[1827]: 2025-08-13T00:25:47.996172Z INFO Daemon Daemon Starting provisioning Aug 13 00:25:47.997068 waagent[1827]: 2025-08-13T00:25:47.996328Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:25:47.997068 waagent[1827]: 2025-08-13T00:25:47.996518Z INFO Daemon Daemon Set hostname [ci-4372.1.0-a-4baf804050] Aug 13 00:25:48.009545 waagent[1827]: 2025-08-13T00:25:48.009509Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-a-4baf804050] Aug 13 00:25:48.013506 waagent[1827]: 2025-08-13T00:25:48.009786Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:25:48.013506 waagent[1827]: 2025-08-13T00:25:48.010105Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:25:48.017466 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
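The wire-server detection above is essentially a reachability test: waagent checks that a route to 168.63.129.16 exists and then speaks the WireServer protocol against it (the same host coreos-metadata queried earlier with ?comp=versions). Roughly equivalent manual checks, shown for illustration only:

    # confirm the guest has a route to the Azure WireServer
    ip route get 168.63.129.16
    # the unauthenticated versions probe, as seen earlier in the log
    curl -s "http://168.63.129.16/?comp=versions"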
Aug 13 00:25:48.017472 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:25:48.017493 systemd-networkd[1583]: eth0: DHCP lease lost Aug 13 00:25:48.018373 waagent[1827]: 2025-08-13T00:25:48.018328Z INFO Daemon Daemon Create user account if not exists Aug 13 00:25:48.018668 waagent[1827]: 2025-08-13T00:25:48.018515Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:25:48.039135 waagent[1827]: 2025-08-13T00:25:48.018674Z INFO Daemon Daemon Configure sudoer Aug 13 00:25:48.043042 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 13 00:25:48.045163 waagent[1827]: 2025-08-13T00:25:48.045121Z INFO Daemon Daemon Configure sshd Aug 13 00:25:48.132318 waagent[1827]: 2025-08-13T00:25:48.132226Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 00:25:48.135113 waagent[1827]: 2025-08-13T00:25:48.132901Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:25:49.196023 waagent[1827]: 2025-08-13T00:25:49.195956Z INFO Daemon Daemon Provisioning complete Aug 13 00:25:49.204027 waagent[1827]: 2025-08-13T00:25:49.203996Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:25:49.204335 waagent[1827]: 2025-08-13T00:25:49.204193Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:25:49.204335 waagent[1827]: 2025-08-13T00:25:49.204370Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Aug 13 00:25:49.307318 waagent[1902]: 2025-08-13T00:25:49.307162Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Aug 13 00:25:49.307318 waagent[1902]: 2025-08-13T00:25:49.307258Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Aug 13 00:25:49.307318 waagent[1902]: 2025-08-13T00:25:49.307298Z INFO ExtHandler ExtHandler Python: 3.11.12 Aug 13 00:25:49.307632 waagent[1902]: 2025-08-13T00:25:49.307334Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Aug 13 00:25:49.317039 waagent[1902]: 2025-08-13T00:25:49.316985Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Aug 13 00:25:49.317167 waagent[1902]: 2025-08-13T00:25:49.317140Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:25:49.317215 waagent[1902]: 2025-08-13T00:25:49.317193Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:25:49.322259 waagent[1902]: 2025-08-13T00:25:49.322213Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:25:49.330659 waagent[1902]: 2025-08-13T00:25:49.330630Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:25:49.330943 waagent[1902]: 2025-08-13T00:25:49.330917Z INFO ExtHandler Aug 13 00:25:49.330984 waagent[1902]: 2025-08-13T00:25:49.330962Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: eab24326-1a4a-45ea-bac6-8f78db2c56ff eTag: 3970843385702472171 source: Fabric] Aug 13 00:25:49.331191 waagent[1902]: 2025-08-13T00:25:49.331166Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 00:25:49.331485 waagent[1902]: 2025-08-13T00:25:49.331462Z INFO ExtHandler Aug 13 00:25:49.331521 waagent[1902]: 2025-08-13T00:25:49.331498Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:25:49.339623 waagent[1902]: 2025-08-13T00:25:49.339590Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:25:49.423592 waagent[1902]: 2025-08-13T00:25:49.423542Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A3AFF32C360152400D6FB3254B63B476C3ABC475', 'hasPrivateKey': True} Aug 13 00:25:49.423921 waagent[1902]: 2025-08-13T00:25:49.423892Z INFO ExtHandler Fetch goal state completed Aug 13 00:25:49.433897 waagent[1902]: 2025-08-13T00:25:49.433854Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Aug 13 00:25:49.438126 waagent[1902]: 2025-08-13T00:25:49.438081Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902 Aug 13 00:25:49.438231 waagent[1902]: 2025-08-13T00:25:49.438206Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 00:25:49.438455 waagent[1902]: 2025-08-13T00:25:49.438432Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Aug 13 00:25:49.439464 waagent[1902]: 2025-08-13T00:25:49.439431Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:25:49.439754 waagent[1902]: 2025-08-13T00:25:49.439728Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Aug 13 00:25:49.439861 waagent[1902]: 2025-08-13T00:25:49.439840Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 13 00:25:49.440283 waagent[1902]: 2025-08-13T00:25:49.440256Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:25:49.445839 waagent[1902]: 2025-08-13T00:25:49.445815Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:25:49.445972 waagent[1902]: 2025-08-13T00:25:49.445950Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:25:49.451497 waagent[1902]: 2025-08-13T00:25:49.451126Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:25:49.456510 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)... Aug 13 00:25:49.456524 systemd[1]: Reloading... Aug 13 00:25:49.538042 zram_generator::config[1958]: No configuration found. Aug 13 00:25:49.608191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:25:49.614392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Aug 13 00:25:49.700696 systemd[1]: Reloading finished in 243 ms. 
Aug 13 00:25:49.713906 waagent[1902]: 2025-08-13T00:25:49.713207Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 00:25:49.713906 waagent[1902]: 2025-08-13T00:25:49.713330Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 00:25:49.915583 waagent[1902]: 2025-08-13T00:25:49.915528Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 00:25:49.915827 waagent[1902]: 2025-08-13T00:25:49.915802Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 13 00:25:49.916535 waagent[1902]: 2025-08-13T00:25:49.916506Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:25:49.916759 waagent[1902]: 2025-08-13T00:25:49.916735Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:25:49.916900 waagent[1902]: 2025-08-13T00:25:49.916878Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:25:49.917284 waagent[1902]: 2025-08-13T00:25:49.917253Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:25:49.917338 waagent[1902]: 2025-08-13T00:25:49.917320Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:25:49.917456 waagent[1902]: 2025-08-13T00:25:49.917438Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:25:49.917490 waagent[1902]: 2025-08-13T00:25:49.917368Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:25:49.917529 waagent[1902]: 2025-08-13T00:25:49.917509Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:25:49.917720 waagent[1902]: 2025-08-13T00:25:49.917683Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:25:49.918070 waagent[1902]: 2025-08-13T00:25:49.917999Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:25:49.918468 waagent[1902]: 2025-08-13T00:25:49.918406Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:25:49.918468 waagent[1902]: 2025-08-13T00:25:49.918447Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:25:49.918579 waagent[1902]: 2025-08-13T00:25:49.918555Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Aug 13 00:25:49.918910 waagent[1902]: 2025-08-13T00:25:49.918851Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:25:49.919005 waagent[1902]: 2025-08-13T00:25:49.918986Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:25:49.919005 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:25:49.919005 waagent[1902]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:25:49.919005 waagent[1902]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:25:49.919005 waagent[1902]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:25:49.919005 waagent[1902]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:25:49.919005 waagent[1902]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:25:49.919665 waagent[1902]: 2025-08-13T00:25:49.919638Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:25:49.925047 waagent[1902]: 2025-08-13T00:25:49.924703Z INFO ExtHandler ExtHandler Aug 13 00:25:49.925047 waagent[1902]: 2025-08-13T00:25:49.924757Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3fc0d3d5-8c19-4ebf-809d-ae062767a19b correlation 0ccc605d-6364-4c5c-837a-75a22c63a4f7 created: 2025-08-13T00:24:31.266891Z] Aug 13 00:25:49.925268 waagent[1902]: 2025-08-13T00:25:49.925160Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 00:25:49.925765 waagent[1902]: 2025-08-13T00:25:49.925739Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Aug 13 00:25:49.931601 waagent[1902]: 2025-08-13T00:25:49.931560Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:25:49.931601 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:25:49.931601 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:25:49.931601 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:21:89 brd ff:ff:ff:ff:ff:ff\ alias Network Device Aug 13 00:25:49.931601 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:21:21:89 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Aug 13 00:25:49.931601 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:25:49.931601 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:25:49.931601 waagent[1902]: 2: eth0 inet 10.200.8.5/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:25:49.931601 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:25:49.931601 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 13 00:25:49.931601 waagent[1902]: 2: eth0 inet6 fe80::7e1e:52ff:fe21:2189/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 13 00:25:49.955984 waagent[1902]: 2025-08-13T00:25:49.955954Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Aug 13 00:25:49.955984 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.) 
Aug 13 00:25:49.956300 waagent[1902]: 2025-08-13T00:25:49.956276Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9E894E76-7561-4BA6-A5C8-0C7E3029A2A2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Aug 13 00:25:50.259824 waagent[1902]: 2025-08-13T00:25:50.259760Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Aug 13 00:25:50.259824 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.259824 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.259824 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.259824 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.259824 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.259824 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.259824 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:25:50.259824 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:25:50.259824 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:25:50.262202 waagent[1902]: 2025-08-13T00:25:50.262150Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 13 00:25:50.262202 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.262202 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.262202 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.262202 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.262202 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:25:50.262202 waagent[1902]: pkts bytes target prot opt in out source destination Aug 13 00:25:50.262202 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:25:50.262202 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:25:50.262202 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:25:53.062868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:25:53.064263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:01.659203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:01.671201 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:26:01.700086 kubelet[2055]: E0813 00:26:01.700056 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:26:01.702442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:26:01.702600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:26:01.702884 systemd[1]: kubelet.service: Consumed 121ms CPU time, 108.6M memory peak. Aug 13 00:26:04.727518 chronyd[1704]: Selected source PHC0 Aug 13 00:26:11.813033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
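The waagent warning about iptables a few lines up appears to come from combining a list and a zero operation in a single call, which iptables-nft rejects; the agent still programs its rules, as the two rule dumps show (TCP to port 53 on 168.63.129.16 and root-owned traffic accepted, other new connections to the WireServer dropped). Listing and zeroing the counters separately works; illustrative commands, not taken from the log:

    # list the security-table OUTPUT chain with numeric output and packet counts
    iptables -w -t security -L OUTPUT -nxv
    # zero the counters in a separate invocation
    iptables -w -t security -Z OUTPUT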
Aug 13 00:26:11.814572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:12.448071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:12.452309 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:26:12.484913 kubelet[2070]: E0813 00:26:12.484877 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:26:12.486333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:26:12.486452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:26:12.486715 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.5M memory peak. Aug 13 00:26:18.136162 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:26:18.137184 systemd[1]: Started sshd@0-10.200.8.5:22-10.200.16.10:55338.service - OpenSSH per-connection server daemon (10.200.16.10:55338). Aug 13 00:26:18.787365 sshd[2078]: Accepted publickey for core from 10.200.16.10 port 55338 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:18.788500 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:18.792850 systemd-logind[1696]: New session 3 of user core. Aug 13 00:26:18.799143 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:26:19.029961 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 13 00:26:19.349438 systemd[1]: Started sshd@1-10.200.8.5:22-10.200.16.10:55344.service - OpenSSH per-connection server daemon (10.200.16.10:55344). Aug 13 00:26:19.980727 sshd[2083]: Accepted publickey for core from 10.200.16.10 port 55344 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:19.981773 sshd-session[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:19.985993 systemd-logind[1696]: New session 4 of user core. Aug 13 00:26:19.994176 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:26:20.422707 sshd[2085]: Connection closed by 10.200.16.10 port 55344 Aug 13 00:26:20.423240 sshd-session[2083]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:20.425679 systemd[1]: sshd@1-10.200.8.5:22-10.200.16.10:55344.service: Deactivated successfully. Aug 13 00:26:20.427283 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:26:20.428608 systemd-logind[1696]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:26:20.429654 systemd-logind[1696]: Removed session 4. Aug 13 00:26:20.546640 systemd[1]: Started sshd@2-10.200.8.5:22-10.200.16.10:47500.service - OpenSSH per-connection server daemon (10.200.16.10:47500). Aug 13 00:26:21.178911 sshd[2091]: Accepted publickey for core from 10.200.16.10 port 47500 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:21.179974 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:21.184312 systemd-logind[1696]: New session 5 of user core. Aug 13 00:26:21.199152 systemd[1]: Started session-5.scope - Session 5 of User core. 
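The sshd@0-10.200.8.5:22-... units above are per-connection services spawned by the socket-activated OpenSSH setup listed earlier (sshd.socket plus the AF_UNIX and AF_VSOCK sockets from systemd-ssh-generator): each inbound connection gets its own transient unit. Illustrative commands for seeing this on a running node:

    # the listening SSH sockets
    systemctl list-units --type=socket 'sshd*'
    # one transient unit per active SSH connection
    systemctl list-units 'sshd@*'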
Aug 13 00:26:21.617229 sshd[2093]: Connection closed by 10.200.16.10 port 47500 Aug 13 00:26:21.617699 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:21.620220 systemd[1]: sshd@2-10.200.8.5:22-10.200.16.10:47500.service: Deactivated successfully. Aug 13 00:26:21.621730 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:26:21.623452 systemd-logind[1696]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:26:21.624263 systemd-logind[1696]: Removed session 5. Aug 13 00:26:21.727488 systemd[1]: Started sshd@3-10.200.8.5:22-10.200.16.10:47502.service - OpenSSH per-connection server daemon (10.200.16.10:47502). Aug 13 00:26:22.360195 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 47502 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:22.361202 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:22.365128 systemd-logind[1696]: New session 6 of user core. Aug 13 00:26:22.375134 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:26:22.562917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:26:22.564371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:22.804854 sshd[2101]: Connection closed by 10.200.16.10 port 47502 Aug 13 00:26:22.805437 sshd-session[2099]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:22.807729 systemd[1]: sshd@3-10.200.8.5:22-10.200.16.10:47502.service: Deactivated successfully. Aug 13 00:26:22.809169 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:26:22.810252 systemd-logind[1696]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:26:22.811501 systemd-logind[1696]: Removed session 6. Aug 13 00:26:22.915344 systemd[1]: Started sshd@4-10.200.8.5:22-10.200.16.10:47512.service - OpenSSH per-connection server daemon (10.200.16.10:47512). Aug 13 00:26:23.021789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:23.024671 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:26:23.057676 kubelet[2117]: E0813 00:26:23.057382 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:26:23.059035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:26:23.059163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:26:23.059466 systemd[1]: kubelet.service: Consumed 119ms CPU time, 108.3M memory peak. Aug 13 00:26:23.548230 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 47512 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:23.549227 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:23.553440 systemd-logind[1696]: New session 7 of user core. Aug 13 00:26:23.559156 systemd[1]: Started session-7.scope - Session 7 of User core. 
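At this point the kubelet has failed three times and systemd keeps scheduling restart jobs; the loop is normal until the node is actually joined to a cluster and /var/lib/kubelet/config.yaml exists. The restart counter seen in the journal is also exposed as a unit property; an illustrative query:

    # NRestarts matches the "restart counter is at N" messages in the journal
    systemctl show kubelet.service -p NRestarts,Result,ExecMainStatus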
Aug 13 00:26:23.913373 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:26:23.913593 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:26:23.930859 sudo[2125]: pam_unix(sudo:session): session closed for user root Aug 13 00:26:24.035957 sshd[2124]: Connection closed by 10.200.16.10 port 47512 Aug 13 00:26:24.036543 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:24.039685 systemd[1]: sshd@4-10.200.8.5:22-10.200.16.10:47512.service: Deactivated successfully. Aug 13 00:26:24.041197 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:26:24.041823 systemd-logind[1696]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:26:24.043119 systemd-logind[1696]: Removed session 7. Aug 13 00:26:24.154274 systemd[1]: Started sshd@5-10.200.8.5:22-10.200.16.10:47514.service - OpenSSH per-connection server daemon (10.200.16.10:47514). Aug 13 00:26:24.782349 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 47514 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:24.783457 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:24.787702 systemd-logind[1696]: New session 8 of user core. Aug 13 00:26:24.795149 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:26:25.125351 sudo[2135]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:26:25.125582 sudo[2135]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:26:25.130411 sudo[2135]: pam_unix(sudo:session): session closed for user root Aug 13 00:26:25.134356 sudo[2134]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:26:25.134582 sudo[2134]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:26:25.142112 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:26:25.170983 augenrules[2157]: No rules Aug 13 00:26:25.171862 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:26:25.172084 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:26:25.173144 sudo[2134]: pam_unix(sudo:session): session closed for user root Aug 13 00:26:25.278346 sshd[2133]: Connection closed by 10.200.16.10 port 47514 Aug 13 00:26:25.278755 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:25.281566 systemd[1]: sshd@5-10.200.8.5:22-10.200.16.10:47514.service: Deactivated successfully. Aug 13 00:26:25.282858 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:26:25.283505 systemd-logind[1696]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:26:25.284524 systemd-logind[1696]: Removed session 8. Aug 13 00:26:25.394380 systemd[1]: Started sshd@6-10.200.8.5:22-10.200.16.10:47530.service - OpenSSH per-connection server daemon (10.200.16.10:47530). Aug 13 00:26:26.020506 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 47530 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:26:26.021494 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:26.025607 systemd-logind[1696]: New session 9 of user core. Aug 13 00:26:26.032128 systemd[1]: Started session-9.scope - Session 9 of User core. 
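The sudo entries above trace the install flow: switch SELinux to enforcing, delete the stock audit rule fragments, then restart audit-rules.service, after which augenrules reports "No rules". The sketch below reproduces that sequence with the commands exactly as logged, plus two standard verification steps that are not in the log.

    # As logged by sudo above (run as root or via sudo).
    setenforce 1                                   # SELinux to enforcing
    rm -rf /etc/audit/rules.d/80-selinux.rules \
           /etc/audit/rules.d/99-default.rules     # drop stock rule fragments
    systemctl restart audit-rules                  # re-runs augenrules via the unit

    # Not in the log: confirm that no audit rules remain loaded.
    augenrules --check
    auditctl -l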
Aug 13 00:26:26.226293 update_engine[1697]: I20250813 00:26:26.226231 1697 update_attempter.cc:509] Updating boot flags... Aug 13 00:26:26.367855 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:26:26.369327 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:26:27.463426 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:26:27.472295 (dockerd)[2222]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:26:27.687671 dockerd[2222]: time="2025-08-13T00:26:27.687626645Z" level=info msg="Starting up" Aug 13 00:26:27.689554 dockerd[2222]: time="2025-08-13T00:26:27.689525642Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:26:27.779855 dockerd[2222]: time="2025-08-13T00:26:27.779701353Z" level=info msg="Loading containers: start." Aug 13 00:26:27.792029 kernel: Initializing XFRM netlink socket Aug 13 00:26:27.957571 systemd-networkd[1583]: docker0: Link UP Aug 13 00:26:27.968000 dockerd[2222]: time="2025-08-13T00:26:27.967970756Z" level=info msg="Loading containers: done." Aug 13 00:26:27.982784 dockerd[2222]: time="2025-08-13T00:26:27.982756200Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:26:27.982881 dockerd[2222]: time="2025-08-13T00:26:27.982814638Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:26:27.982907 dockerd[2222]: time="2025-08-13T00:26:27.982893374Z" level=info msg="Initializing buildkit" Aug 13 00:26:28.015073 dockerd[2222]: time="2025-08-13T00:26:28.015039174Z" level=info msg="Completed buildkit initialization" Aug 13 00:26:28.020358 dockerd[2222]: time="2025-08-13T00:26:28.020318239Z" level=info msg="Daemon has completed initialization" Aug 13 00:26:28.020803 dockerd[2222]: time="2025-08-13T00:26:28.020372581Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:26:28.020520 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:26:29.122201 containerd[1741]: time="2025-08-13T00:26:29.122159970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:26:29.873695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111820975.mount: Deactivated successfully. 
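dockerd comes up with the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. A purely diagnostic sketch for confirming the driver and the overlay module's redirect_dir setting; nothing here changes state.

    # Storage driver dockerd selected (overlay2 per the log above).
    docker info --format '{{.Driver}}'

    # The warning is tied to the overlay module's redirect_dir feature.
    modinfo overlay | grep -i redirect_dir
    cat /sys/module/overlay/parameters/redirect_dir 2>/dev/null \
      || echo "redirect_dir parameter not exposed on this kernel"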
Aug 13 00:26:30.905666 containerd[1741]: time="2025-08-13T00:26:30.905621636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:30.907484 containerd[1741]: time="2025-08-13T00:26:30.907449991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077767" Aug 13 00:26:30.910069 containerd[1741]: time="2025-08-13T00:26:30.910023825Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:30.913214 containerd[1741]: time="2025-08-13T00:26:30.913160393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:30.913982 containerd[1741]: time="2025-08-13T00:26:30.913802831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.791603329s" Aug 13 00:26:30.913982 containerd[1741]: time="2025-08-13T00:26:30.913836503Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:26:30.914393 containerd[1741]: time="2025-08-13T00:26:30.914369764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:26:32.048743 containerd[1741]: time="2025-08-13T00:26:32.048698089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:32.050942 containerd[1741]: time="2025-08-13T00:26:32.050888919Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713253" Aug 13 00:26:32.053729 containerd[1741]: time="2025-08-13T00:26:32.053692771Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:32.057290 containerd[1741]: time="2025-08-13T00:26:32.057238500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:32.058133 containerd[1741]: time="2025-08-13T00:26:32.057787790Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.143392806s" Aug 13 00:26:32.058133 containerd[1741]: time="2025-08-13T00:26:32.057815600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 
00:26:32.058567 containerd[1741]: time="2025-08-13T00:26:32.058546021Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:26:33.062949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:26:33.066204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:33.375097 containerd[1741]: time="2025-08-13T00:26:33.374315967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:33.554062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:33.557264 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:26:33.585774 kubelet[2489]: E0813 00:26:33.585735 2489 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:26:33.587252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:26:33.587376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:26:33.587651 systemd[1]: kubelet.service: Consumed 118ms CPU time, 108.4M memory peak. Aug 13 00:26:33.623065 containerd[1741]: time="2025-08-13T00:26:33.623021075Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783708" Aug 13 00:26:33.625962 containerd[1741]: time="2025-08-13T00:26:33.625662506Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:33.629413 containerd[1741]: time="2025-08-13T00:26:33.629362652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:33.630411 containerd[1741]: time="2025-08-13T00:26:33.629998459Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.571422677s" Aug 13 00:26:33.630411 containerd[1741]: time="2025-08-13T00:26:33.630042453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:26:33.630600 containerd[1741]: time="2025-08-13T00:26:33.630581314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:26:34.428298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350140012.mount: Deactivated successfully. 
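The "Scheduled restart job, restart counter is at N" lines show systemd's Restart= policy re-launching kubelet after every config-file failure, roughly every ten seconds per the timestamps. A sketch for inspecting that policy and the counter; the property names are standard systemd ones.

    # Restart policy and how often it has fired for kubelet.service.
    systemctl show kubelet.service -p Restart,RestartUSec,NRestarts --no-pager

    # The unit file and any drop-ins that define the policy.
    systemctl cat kubelet.service --no-pager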
Aug 13 00:26:34.758924 containerd[1741]: time="2025-08-13T00:26:34.758829827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:34.760768 containerd[1741]: time="2025-08-13T00:26:34.760731857Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383620" Aug 13 00:26:34.765376 containerd[1741]: time="2025-08-13T00:26:34.765337006Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:34.768732 containerd[1741]: time="2025-08-13T00:26:34.768693949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:34.769095 containerd[1741]: time="2025-08-13T00:26:34.768976955Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.138371165s" Aug 13 00:26:34.769095 containerd[1741]: time="2025-08-13T00:26:34.769004803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:26:34.769514 containerd[1741]: time="2025-08-13T00:26:34.769494232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:26:35.378441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004165412.mount: Deactivated successfully. 
Aug 13 00:26:36.203532 containerd[1741]: time="2025-08-13T00:26:36.203487004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:36.205656 containerd[1741]: time="2025-08-13T00:26:36.205624512Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Aug 13 00:26:36.208220 containerd[1741]: time="2025-08-13T00:26:36.208183954Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:36.211247 containerd[1741]: time="2025-08-13T00:26:36.211205878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:36.212024 containerd[1741]: time="2025-08-13T00:26:36.211799530Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.442280178s" Aug 13 00:26:36.212024 containerd[1741]: time="2025-08-13T00:26:36.211829940Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:26:36.212350 containerd[1741]: time="2025-08-13T00:26:36.212335235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:26:36.745526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983602913.mount: Deactivated successfully. 
Aug 13 00:26:36.759663 containerd[1741]: time="2025-08-13T00:26:36.759632214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:26:36.761739 containerd[1741]: time="2025-08-13T00:26:36.761707131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Aug 13 00:26:36.764115 containerd[1741]: time="2025-08-13T00:26:36.764075171Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:26:36.767376 containerd[1741]: time="2025-08-13T00:26:36.767336893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:26:36.768024 containerd[1741]: time="2025-08-13T00:26:36.767745388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.384644ms" Aug 13 00:26:36.768024 containerd[1741]: time="2025-08-13T00:26:36.767773008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:26:36.768254 containerd[1741]: time="2025-08-13T00:26:36.768235671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:26:37.170842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333950.mount: Deactivated successfully. 
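By this point containerd has pulled the v1.31.11 control-plane images, CoreDNS v1.11.3 and the pinned pause:3.10 sandbox image, with etcd:3.5.15-0 completing just below. A sketch for pre-pulling or verifying the same set through the CRI; the image list is copied from the PullImage messages in the log.

    # Images observed in the pull messages above (plus etcd, pulled next).
    images='
    registry.k8s.io/kube-apiserver:v1.31.11
    registry.k8s.io/kube-controller-manager:v1.31.11
    registry.k8s.io/kube-scheduler:v1.31.11
    registry.k8s.io/kube-proxy:v1.31.11
    registry.k8s.io/coredns/coredns:v1.11.3
    registry.k8s.io/pause:3.10
    registry.k8s.io/etcd:3.5.15-0
    '

    # Pulling is idempotent; then list what the CRI image store holds.
    for img in $images; do
      crictl pull "$img"
    done
    crictl images --digests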
Aug 13 00:26:38.738940 containerd[1741]: time="2025-08-13T00:26:38.738891122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:38.744579 containerd[1741]: time="2025-08-13T00:26:38.744543240Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Aug 13 00:26:38.746718 containerd[1741]: time="2025-08-13T00:26:38.746685557Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:38.750100 containerd[1741]: time="2025-08-13T00:26:38.750065050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:38.750924 containerd[1741]: time="2025-08-13T00:26:38.750806840Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.98254748s" Aug 13 00:26:38.750924 containerd[1741]: time="2025-08-13T00:26:38.750835861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:26:40.630381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:40.630540 systemd[1]: kubelet.service: Consumed 118ms CPU time, 108.4M memory peak. Aug 13 00:26:40.632645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:40.654594 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-9.scope)... Aug 13 00:26:40.654606 systemd[1]: Reloading... Aug 13 00:26:40.750040 zram_generator::config[2689]: No configuration found. Aug 13 00:26:40.861542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:26:40.953252 systemd[1]: Reloading finished in 298 ms. Aug 13 00:26:40.982638 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:26:40.982704 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:26:40.982911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:40.982962 systemd[1]: kubelet.service: Consumed 66ms CPU time, 69.9M memory peak. Aug 13 00:26:40.984614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:41.476074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:41.484455 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:26:41.519298 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:26:41.519298 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:26:41.519298 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:26:41.519563 kubelet[2756]: I0813 00:26:41.519359 2756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:26:41.682100 kubelet[2756]: I0813 00:26:41.682076 2756 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:26:41.682100 kubelet[2756]: I0813 00:26:41.682095 2756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:26:41.682269 kubelet[2756]: I0813 00:26:41.682257 2756 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:26:41.709289 kubelet[2756]: E0813 00:26:41.709254 2756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:41.709913 kubelet[2756]: I0813 00:26:41.709691 2756 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:26:41.714346 kubelet[2756]: I0813 00:26:41.714331 2756 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:26:41.716775 kubelet[2756]: I0813 00:26:41.716759 2756 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:26:41.717271 kubelet[2756]: I0813 00:26:41.717256 2756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:26:41.717390 kubelet[2756]: I0813 00:26:41.717363 2756 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:26:41.717518 kubelet[2756]: I0813 00:26:41.717395 2756 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-4baf804050","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:26:41.717621 kubelet[2756]: I0813 00:26:41.717524 2756 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:26:41.717621 kubelet[2756]: I0813 00:26:41.717533 2756 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:26:41.717621 kubelet[2756]: I0813 00:26:41.717614 2756 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:26:41.720782 kubelet[2756]: I0813 00:26:41.720592 2756 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:26:41.720782 kubelet[2756]: I0813 00:26:41.720612 2756 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:26:41.720782 kubelet[2756]: I0813 00:26:41.720639 2756 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:26:41.720782 kubelet[2756]: I0813 00:26:41.720657 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:26:41.723683 kubelet[2756]: W0813 00:26:41.723501 2756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-4baf804050&limit=500&resourceVersion=0": dial tcp 10.200.8.5:6443: connect: connection refused Aug 13 00:26:41.723800 kubelet[2756]: E0813 00:26:41.723772 2756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-4baf804050&limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:41.724063 kubelet[2756]: I0813 00:26:41.723896 2756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:26:41.724311 kubelet[2756]: I0813 00:26:41.724298 2756 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:26:41.724844 kubelet[2756]: W0813 00:26:41.724818 2756 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:26:41.726965 kubelet[2756]: I0813 00:26:41.726950 2756 server.go:1274] "Started kubelet" Aug 13 00:26:41.731606 kubelet[2756]: W0813 00:26:41.731427 2756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.5:6443: connect: connection refused Aug 13 00:26:41.731606 kubelet[2756]: E0813 00:26:41.731476 2756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:41.734410 kubelet[2756]: E0813 00:26:41.731531 2756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.5:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-a-4baf804050.185b2bf2d3d44f20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-a-4baf804050,UID:ci-4372.1.0-a-4baf804050,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-a-4baf804050,},FirstTimestamp:2025-08-13 00:26:41.726926624 +0000 UTC m=+0.238825825,LastTimestamp:2025-08-13 00:26:41.726926624 +0000 UTC m=+0.238825825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-a-4baf804050,}" Aug 13 00:26:41.734410 kubelet[2756]: I0813 00:26:41.733506 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:26:41.735630 kubelet[2756]: I0813 00:26:41.735518 2756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:26:41.737130 kubelet[2756]: I0813 00:26:41.737097 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:26:41.737736 kubelet[2756]: I0813 00:26:41.737715 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:26:41.738153 kubelet[2756]: I0813 00:26:41.738141 2756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:26:41.738265 kubelet[2756]: I0813 00:26:41.738259 2756 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:26:41.738471 kubelet[2756]: E0813 00:26:41.738461 2756 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-4baf804050\" not found" Aug 13 00:26:41.739519 kubelet[2756]: E0813 00:26:41.739484 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-4baf804050?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="200ms" Aug 13 00:26:41.739648 kubelet[2756]: E0813 00:26:41.739628 2756 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:26:41.739907 kubelet[2756]: I0813 00:26:41.739898 2756 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:26:41.739998 kubelet[2756]: I0813 00:26:41.739987 2756 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:26:41.741246 kubelet[2756]: I0813 00:26:41.741220 2756 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:26:41.741319 kubelet[2756]: I0813 00:26:41.741284 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:26:41.742963 kubelet[2756]: I0813 00:26:41.742664 2756 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:26:41.743221 kubelet[2756]: W0813 00:26:41.743184 2756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.5:6443: connect: connection refused Aug 13 00:26:41.743303 kubelet[2756]: E0813 00:26:41.743290 2756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:41.743361 kubelet[2756]: I0813 00:26:41.743257 2756 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:26:41.754121 kubelet[2756]: I0813 00:26:41.754108 2756 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:26:41.754228 kubelet[2756]: I0813 00:26:41.754216 2756 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:26:41.754273 kubelet[2756]: I0813 00:26:41.754232 2756 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:26:41.758102 kubelet[2756]: I0813 00:26:41.758087 2756 policy_none.go:49] "None policy: Start" Aug 13 00:26:41.758567 kubelet[2756]: I0813 00:26:41.758554 2756 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:26:41.758619 kubelet[2756]: I0813 00:26:41.758571 2756 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:26:41.765444 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:26:41.776145 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:26:41.778694 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
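The HardEvictionThresholds embedded in the container manager NodeConfig above are the kubelet defaults: memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%. Written out as explicit KubeletConfiguration fields they would look like the sketch below; this node simply relies on the defaults rather than setting them.

    # Illustrative only: the default hard-eviction thresholds as explicit config,
    # written to /tmp so the real kubelet config is left alone.
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'evictionHard:' \
      '  memory.available: "100Mi"' \
      '  nodefs.available: "10%"' \
      '  nodefs.inodesFree: "5%"' \
      '  imagefs.available: "15%"' \
      '  imagefs.inodesFree: "5%"' \
      > /tmp/kubelet-eviction-example.yaml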
Aug 13 00:26:41.789560 kubelet[2756]: I0813 00:26:41.789548 2756 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:26:41.790158 kubelet[2756]: I0813 00:26:41.789750 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:26:41.790158 kubelet[2756]: I0813 00:26:41.789761 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:26:41.790158 kubelet[2756]: I0813 00:26:41.789923 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:26:41.791222 kubelet[2756]: E0813 00:26:41.791190 2756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-a-4baf804050\" not found" Aug 13 00:26:41.821493 kubelet[2756]: I0813 00:26:41.821449 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:26:41.823245 kubelet[2756]: I0813 00:26:41.823201 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:26:41.823245 kubelet[2756]: I0813 00:26:41.823219 2756 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:26:41.823245 kubelet[2756]: I0813 00:26:41.823233 2756 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:26:41.823466 kubelet[2756]: E0813 00:26:41.823455 2756 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:26:41.824107 kubelet[2756]: W0813 00:26:41.824051 2756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.5:6443: connect: connection refused Aug 13 00:26:41.824107 kubelet[2756]: E0813 00:26:41.824080 2756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:41.890810 kubelet[2756]: I0813 00:26:41.890793 2756 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.891077 kubelet[2756]: E0813 00:26:41.891059 2756 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.930885 systemd[1]: Created slice kubepods-burstable-podeb5a6b42b44f57723cdd803494615856.slice - libcontainer container kubepods-burstable-podeb5a6b42b44f57723cdd803494615856.slice. 
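Every "connection refused" against https://10.200.8.5:6443 above is expected at this stage: the kubelet is running, but the kube-apiserver static pod it is about to create has not started yet, so the client informers, the event recorder and the node-lease controller all keep retrying. A diagnostic sketch using the address from the log; on a healthy control plane /healthz returns "ok".

    # Is anything listening on the API server port yet?
    ss -ltnp | grep 6443 || echo "nothing listening on 6443 yet"

    # Returns "ok" once the kube-apiserver static pod is up.
    curl -sk https://10.200.8.5:6443/healthz; echo

    # Static pods and containers as the CRI runtime sees them.
    crictl pods --namespace kube-system
    crictl ps --name kube-apiserver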
Aug 13 00:26:41.940143 kubelet[2756]: E0813 00:26:41.940115 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-4baf804050?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="400ms" Aug 13 00:26:41.944427 kubelet[2756]: I0813 00:26:41.944401 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944467 kubelet[2756]: I0813 00:26:41.944432 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944467 kubelet[2756]: I0813 00:26:41.944453 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944511 kubelet[2756]: I0813 00:26:41.944470 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944511 kubelet[2756]: I0813 00:26:41.944487 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944511 kubelet[2756]: I0813 00:26:41.944503 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31bfa97c82742420d132bba9efee79e3-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-4baf804050\" (UID: \"31bfa97c82742420d132bba9efee79e3\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944571 kubelet[2756]: I0813 00:26:41.944519 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944571 kubelet[2756]: I0813 00:26:41.944541 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.944571 kubelet[2756]: I0813 00:26:41.944557 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:41.952416 systemd[1]: Created slice kubepods-burstable-podbeb48dbdb645b018a818230cd0017988.slice - libcontainer container kubepods-burstable-podbeb48dbdb645b018a818230cd0017988.slice. Aug 13 00:26:41.955557 systemd[1]: Created slice kubepods-burstable-pod31bfa97c82742420d132bba9efee79e3.slice - libcontainer container kubepods-burstable-pod31bfa97c82742420d132bba9efee79e3.slice. Aug 13 00:26:42.092338 kubelet[2756]: I0813 00:26:42.092319 2756 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:42.092561 kubelet[2756]: E0813 00:26:42.092540 2756 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:42.251433 containerd[1741]: time="2025-08-13T00:26:42.251392034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-4baf804050,Uid:eb5a6b42b44f57723cdd803494615856,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:42.254828 containerd[1741]: time="2025-08-13T00:26:42.254804883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-4baf804050,Uid:beb48dbdb645b018a818230cd0017988,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:42.258239 containerd[1741]: time="2025-08-13T00:26:42.258217077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-4baf804050,Uid:31bfa97c82742420d132bba9efee79e3,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:42.307061 containerd[1741]: time="2025-08-13T00:26:42.306917707Z" level=info msg="connecting to shim 726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536" address="unix:///run/containerd/s/653e69c26d9a1113d2e93a8443f62f46039cc993b8b83d355c34f5e177c3e7eb" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:42.328896 containerd[1741]: time="2025-08-13T00:26:42.328867647Z" level=info msg="connecting to shim 42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20" address="unix:///run/containerd/s/1869bf244f9b59b7f9112657c4e88bf7aa8e0e868635cef668c1cf726cda651a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:42.341112 kubelet[2756]: E0813 00:26:42.341082 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-4baf804050?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="800ms" Aug 13 00:26:42.345462 containerd[1741]: time="2025-08-13T00:26:42.345352920Z" level=info msg="connecting to shim 58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef" address="unix:///run/containerd/s/6441f74df27098033ba1141bd89a14c4f47a7fbf9013654c4ab85c9fc8d98d77" 
namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:42.346167 systemd[1]: Started cri-containerd-726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536.scope - libcontainer container 726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536. Aug 13 00:26:42.357121 systemd[1]: Started cri-containerd-42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20.scope - libcontainer container 42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20. Aug 13 00:26:42.375137 systemd[1]: Started cri-containerd-58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef.scope - libcontainer container 58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef. Aug 13 00:26:42.439649 containerd[1741]: time="2025-08-13T00:26:42.439633641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-4baf804050,Uid:31bfa97c82742420d132bba9efee79e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef\"" Aug 13 00:26:42.442653 containerd[1741]: time="2025-08-13T00:26:42.442631593Z" level=info msg="CreateContainer within sandbox \"58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:26:42.444416 containerd[1741]: time="2025-08-13T00:26:42.444398086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-4baf804050,Uid:eb5a6b42b44f57723cdd803494615856,Namespace:kube-system,Attempt:0,} returns sandbox id \"726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536\"" Aug 13 00:26:42.445874 containerd[1741]: time="2025-08-13T00:26:42.445850317Z" level=info msg="CreateContainer within sandbox \"726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:26:42.446873 containerd[1741]: time="2025-08-13T00:26:42.446844551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-4baf804050,Uid:beb48dbdb645b018a818230cd0017988,Namespace:kube-system,Attempt:0,} returns sandbox id \"42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20\"" Aug 13 00:26:42.448337 containerd[1741]: time="2025-08-13T00:26:42.448303131Z" level=info msg="CreateContainer within sandbox \"42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:26:42.464510 containerd[1741]: time="2025-08-13T00:26:42.464481757Z" level=info msg="Container 2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:26:42.470345 containerd[1741]: time="2025-08-13T00:26:42.470323048Z" level=info msg="Container 9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:26:42.473827 containerd[1741]: time="2025-08-13T00:26:42.473804024Z" level=info msg="Container dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:26:42.483753 containerd[1741]: time="2025-08-13T00:26:42.483726495Z" level=info msg="CreateContainer within sandbox \"58918edaaa0949f689bd7657386a9b8c8d2fc4a7bc2d6b13b24968d6c8fadcef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9\"" Aug 13 00:26:42.484273 containerd[1741]: 
time="2025-08-13T00:26:42.484252276Z" level=info msg="StartContainer for \"2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9\"" Aug 13 00:26:42.485158 containerd[1741]: time="2025-08-13T00:26:42.485121201Z" level=info msg="connecting to shim 2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9" address="unix:///run/containerd/s/6441f74df27098033ba1141bd89a14c4f47a7fbf9013654c4ab85c9fc8d98d77" protocol=ttrpc version=3 Aug 13 00:26:42.495399 kubelet[2756]: I0813 00:26:42.495364 2756 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:42.495672 kubelet[2756]: E0813 00:26:42.495630 2756 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:42.496334 containerd[1741]: time="2025-08-13T00:26:42.496285779Z" level=info msg="CreateContainer within sandbox \"726b731cb2e0fc85d4a125960acddd954a94349b805e71e9cabb494228a9f536\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a\"" Aug 13 00:26:42.497294 containerd[1741]: time="2025-08-13T00:26:42.497177052Z" level=info msg="CreateContainer within sandbox \"42a58e9079280d7c6ee96b35cc08eb3a58c563ab2ef989c12c019f54f17b1e20\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115\"" Aug 13 00:26:42.497670 containerd[1741]: time="2025-08-13T00:26:42.497644725Z" level=info msg="StartContainer for \"9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a\"" Aug 13 00:26:42.498134 containerd[1741]: time="2025-08-13T00:26:42.498111170Z" level=info msg="StartContainer for \"dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115\"" Aug 13 00:26:42.498824 containerd[1741]: time="2025-08-13T00:26:42.498795919Z" level=info msg="connecting to shim dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115" address="unix:///run/containerd/s/1869bf244f9b59b7f9112657c4e88bf7aa8e0e868635cef668c1cf726cda651a" protocol=ttrpc version=3 Aug 13 00:26:42.499795 containerd[1741]: time="2025-08-13T00:26:42.499756420Z" level=info msg="connecting to shim 9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a" address="unix:///run/containerd/s/653e69c26d9a1113d2e93a8443f62f46039cc993b8b83d355c34f5e177c3e7eb" protocol=ttrpc version=3 Aug 13 00:26:42.507221 systemd[1]: Started cri-containerd-2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9.scope - libcontainer container 2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9. Aug 13 00:26:42.527144 systemd[1]: Started cri-containerd-dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115.scope - libcontainer container dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115. Aug 13 00:26:42.530728 systemd[1]: Started cri-containerd-9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a.scope - libcontainer container 9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a. 
Aug 13 00:26:42.567069 kubelet[2756]: W0813 00:26:42.567004 2756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.5:6443: connect: connection refused Aug 13 00:26:42.567314 kubelet[2756]: E0813 00:26:42.567082 2756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:26:42.606333 containerd[1741]: time="2025-08-13T00:26:42.606089519Z" level=info msg="StartContainer for \"dd8a5b028c3d8546e67025051f0ba0ed9d13263d0428f825bc04caf2810b0115\" returns successfully" Aug 13 00:26:42.609842 containerd[1741]: time="2025-08-13T00:26:42.609662852Z" level=info msg="StartContainer for \"9578b76bf1fd70b683f7b92028a4399d858583373f25ecaa0d97cad7598b8f9a\" returns successfully" Aug 13 00:26:42.610506 containerd[1741]: time="2025-08-13T00:26:42.609773282Z" level=info msg="StartContainer for \"2cf7c96bd34bfc5a736b2432c477b2f438e35ffb5ec85b40514781836c856ff9\" returns successfully" Aug 13 00:26:43.298497 kubelet[2756]: I0813 00:26:43.298463 2756 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:44.191050 kubelet[2756]: E0813 00:26:44.190812 2756 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-a-4baf804050\" not found" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:44.214194 kubelet[2756]: I0813 00:26:44.214154 2756 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:44.214476 kubelet[2756]: E0813 00:26:44.214331 2756 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-a-4baf804050\": node \"ci-4372.1.0-a-4baf804050\" not found" Aug 13 00:26:44.728318 kubelet[2756]: I0813 00:26:44.728289 2756 apiserver.go:52] "Watching apiserver" Aug 13 00:26:44.740554 kubelet[2756]: I0813 00:26:44.740534 2756 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:26:44.841478 kubelet[2756]: E0813 00:26:44.841296 2756 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-a-4baf804050\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:46.437250 systemd[1]: Reload requested from client PID 3025 ('systemctl') (unit session-9.scope)... Aug 13 00:26:46.437264 systemd[1]: Reloading... Aug 13 00:26:46.515098 zram_generator::config[3070]: No configuration found. Aug 13 00:26:46.598584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:26:46.700319 systemd[1]: Reloading finished in 262 ms. Aug 13 00:26:46.717480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:46.739764 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:26:46.739960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
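Once the API server answers, registration succeeds ("Successfully registered node"), and the section closes with another systemd reload and a kubelet restart that picks up the bootstrapped client certificate mentioned just below. A sketch for confirming the result from the control-plane side; the node name is the one from the log, and kubectl is assumed to have admin credentials.

    # Node object and control-plane pods for this host.
    kubectl get node ci-4372.1.0-a-4baf804050 -o wide
    kubectl get pods -n kube-system -o wide | grep ci-4372.1.0-a-4baf804050

    # The bootstrapped kubelet client certificate loaded after the restart.
    ls -l /var/lib/kubelet/pki/kubelet-client-current.pem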
Aug 13 00:26:46.740000 systemd[1]: kubelet.service: Consumed 515ms CPU time, 130.3M memory peak. Aug 13 00:26:46.741716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:26:47.256178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:26:47.262332 (kubelet)[3138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:26:47.306636 kubelet[3138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:26:47.306636 kubelet[3138]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:26:47.306636 kubelet[3138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:26:47.306899 kubelet[3138]: I0813 00:26:47.306693 3138 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:26:47.315150 kubelet[3138]: I0813 00:26:47.315125 3138 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:26:47.315150 kubelet[3138]: I0813 00:26:47.315145 3138 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:26:47.315363 kubelet[3138]: I0813 00:26:47.315349 3138 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:26:47.316298 kubelet[3138]: I0813 00:26:47.316280 3138 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:26:47.317807 kubelet[3138]: I0813 00:26:47.317689 3138 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:26:47.322956 kubelet[3138]: I0813 00:26:47.322940 3138 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:26:47.325929 kubelet[3138]: I0813 00:26:47.325909 3138 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:26:47.326045 kubelet[3138]: I0813 00:26:47.326034 3138 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:26:47.326140 kubelet[3138]: I0813 00:26:47.326121 3138 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:26:47.326423 kubelet[3138]: I0813 00:26:47.326141 3138 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-4baf804050","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:26:47.326520 kubelet[3138]: I0813 00:26:47.326433 3138 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:26:47.326520 kubelet[3138]: I0813 00:26:47.326445 3138 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:26:47.326520 kubelet[3138]: I0813 00:26:47.326474 3138 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:26:47.326585 kubelet[3138]: I0813 00:26:47.326561 3138 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:26:47.326585 kubelet[3138]: I0813 00:26:47.326572 3138 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:26:47.326622 kubelet[3138]: I0813 00:26:47.326598 3138 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:26:47.326622 kubelet[3138]: I0813 00:26:47.326607 3138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:26:47.335038 kubelet[3138]: I0813 00:26:47.334523 3138 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:26:47.335038 kubelet[3138]: I0813 00:26:47.334834 3138 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:26:47.335327 kubelet[3138]: I0813 00:26:47.335310 3138 server.go:1274] "Started kubelet" Aug 13 00:26:47.337070 kubelet[3138]: I0813 00:26:47.337058 3138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:26:47.338476 kubelet[3138]: 
I0813 00:26:47.338462 3138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:26:47.338554 kubelet[3138]: I0813 00:26:47.338527 3138 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:26:47.339313 kubelet[3138]: I0813 00:26:47.339299 3138 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:26:47.342110 kubelet[3138]: I0813 00:26:47.342079 3138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:26:47.342257 kubelet[3138]: I0813 00:26:47.342243 3138 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:26:47.345863 kubelet[3138]: I0813 00:26:47.345826 3138 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:26:47.347986 kubelet[3138]: I0813 00:26:47.347972 3138 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:26:47.348163 kubelet[3138]: I0813 00:26:47.348149 3138 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:26:47.348245 kubelet[3138]: I0813 00:26:47.348233 3138 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:26:47.348291 kubelet[3138]: I0813 00:26:47.348227 3138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:26:47.348990 kubelet[3138]: E0813 00:26:47.348960 3138 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:26:47.351716 kubelet[3138]: I0813 00:26:47.351608 3138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:26:47.352428 kubelet[3138]: I0813 00:26:47.352414 3138 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:26:47.353924 kubelet[3138]: I0813 00:26:47.353909 3138 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:26:47.354036 kubelet[3138]: I0813 00:26:47.353999 3138 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:26:47.354289 kubelet[3138]: I0813 00:26:47.354077 3138 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:26:47.354289 kubelet[3138]: E0813 00:26:47.354106 3138 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:26:47.394349 kubelet[3138]: I0813 00:26:47.394331 3138 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:26:47.394349 kubelet[3138]: I0813 00:26:47.394343 3138 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:26:47.394449 kubelet[3138]: I0813 00:26:47.394359 3138 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:26:47.394500 kubelet[3138]: I0813 00:26:47.394485 3138 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:26:47.394528 kubelet[3138]: I0813 00:26:47.394498 3138 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:26:47.394528 kubelet[3138]: I0813 00:26:47.394514 3138 policy_none.go:49] "None policy: Start" Aug 13 00:26:47.394989 kubelet[3138]: I0813 00:26:47.394976 3138 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:26:47.395127 kubelet[3138]: I0813 00:26:47.394994 3138 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:26:47.395171 kubelet[3138]: I0813 00:26:47.395145 3138 state_mem.go:75] "Updated machine memory state" Aug 13 00:26:47.398257 kubelet[3138]: I0813 00:26:47.398241 3138 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:26:47.398393 kubelet[3138]: I0813 00:26:47.398382 3138 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:26:47.398428 kubelet[3138]: I0813 00:26:47.398396 3138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:26:47.398895 kubelet[3138]: I0813 00:26:47.398631 3138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:26:47.426845 sudo[3169]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:26:47.427152 sudo[3169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:26:47.469570 kubelet[3138]: W0813 00:26:47.469549 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:26:47.476160 kubelet[3138]: W0813 00:26:47.475947 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:26:47.476322 kubelet[3138]: W0813 00:26:47.476308 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:26:47.507078 kubelet[3138]: I0813 00:26:47.505960 3138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.524528 kubelet[3138]: I0813 00:26:47.524511 3138 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.524604 kubelet[3138]: I0813 00:26:47.524581 3138 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549220 kubelet[3138]: I0813 00:26:47.548996 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549220 kubelet[3138]: I0813 00:26:47.549042 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549220 kubelet[3138]: I0813 00:26:47.549059 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31bfa97c82742420d132bba9efee79e3-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-4baf804050\" (UID: \"31bfa97c82742420d132bba9efee79e3\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549220 kubelet[3138]: I0813 00:26:47.549073 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549220 kubelet[3138]: I0813 00:26:47.549086 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549358 kubelet[3138]: I0813 00:26:47.549142 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549358 kubelet[3138]: I0813 00:26:47.549166 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549358 kubelet[3138]: I0813 00:26:47.549223 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb5a6b42b44f57723cdd803494615856-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-4baf804050\" (UID: \"eb5a6b42b44f57723cdd803494615856\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.549358 kubelet[3138]: I0813 00:26:47.549237 3138 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beb48dbdb645b018a818230cd0017988-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-4baf804050\" (UID: \"beb48dbdb645b018a818230cd0017988\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" Aug 13 00:26:47.954637 sudo[3169]: pam_unix(sudo:session): session closed for user root Aug 13 00:26:48.330298 kubelet[3138]: I0813 00:26:48.330270 3138 apiserver.go:52] "Watching apiserver" Aug 13 00:26:48.348269 kubelet[3138]: I0813 00:26:48.348238 3138 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:26:48.392744 kubelet[3138]: W0813 00:26:48.392719 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:26:48.392924 kubelet[3138]: E0813 00:26:48.392770 3138 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-a-4baf804050\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" Aug 13 00:26:48.409519 kubelet[3138]: I0813 00:26:48.409345 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-a-4baf804050" podStartSLOduration=1.409330091 podStartE2EDuration="1.409330091s" podCreationTimestamp="2025-08-13 00:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:26:48.409238232 +0000 UTC m=+1.143542046" watchObservedRunningTime="2025-08-13 00:26:48.409330091 +0000 UTC m=+1.143633898" Aug 13 00:26:48.409519 kubelet[3138]: I0813 00:26:48.409451 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-a-4baf804050" podStartSLOduration=1.409444521 podStartE2EDuration="1.409444521s" podCreationTimestamp="2025-08-13 00:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:26:48.401784572 +0000 UTC m=+1.136088406" watchObservedRunningTime="2025-08-13 00:26:48.409444521 +0000 UTC m=+1.143748333" Aug 13 00:26:48.427699 kubelet[3138]: I0813 00:26:48.427662 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-a-4baf804050" podStartSLOduration=1.4276300370000001 podStartE2EDuration="1.427630037s" podCreationTimestamp="2025-08-13 00:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:26:48.417652817 +0000 UTC m=+1.151956657" watchObservedRunningTime="2025-08-13 00:26:48.427630037 +0000 UTC m=+1.161933837" Aug 13 00:26:49.121003 sudo[2194]: pam_unix(sudo:session): session closed for user root Aug 13 00:26:49.224689 sshd[2168]: Connection closed by 10.200.16.10 port 47530 Aug 13 00:26:49.225145 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:49.228164 systemd[1]: sshd@6-10.200.8.5:22-10.200.16.10:47530.service: Deactivated successfully. Aug 13 00:26:49.230167 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:26:49.230327 systemd[1]: session-9.scope: Consumed 3.037s CPU time, 267M memory peak. Aug 13 00:26:49.231594 systemd-logind[1696]: Session 9 logged out. 
Waiting for processes to exit. Aug 13 00:26:49.233259 systemd-logind[1696]: Removed session 9. Aug 13 00:26:52.127160 kubelet[3138]: I0813 00:26:52.127128 3138 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:26:52.127634 kubelet[3138]: I0813 00:26:52.127555 3138 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:26:52.127672 containerd[1741]: time="2025-08-13T00:26:52.127384169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:26:53.056136 systemd[1]: Created slice kubepods-besteffort-pod3fdbb2df_aaba_4f24_bc0e_2324850f6077.slice - libcontainer container kubepods-besteffort-pod3fdbb2df_aaba_4f24_bc0e_2324850f6077.slice. Aug 13 00:26:53.071057 systemd[1]: Created slice kubepods-burstable-pod10ce67f0_3906_4fa3_9791_529f46ef09da.slice - libcontainer container kubepods-burstable-pod10ce67f0_3906_4fa3_9791_529f46ef09da.slice. Aug 13 00:26:53.084310 kubelet[3138]: I0813 00:26:53.084278 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10ce67f0-3906-4fa3-9791-529f46ef09da-clustermesh-secrets\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084412 kubelet[3138]: I0813 00:26:53.084318 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-kernel\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084412 kubelet[3138]: I0813 00:26:53.084337 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-cgroup\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084412 kubelet[3138]: I0813 00:26:53.084353 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-config-path\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084412 kubelet[3138]: I0813 00:26:53.084372 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vltm2\" (UniqueName: \"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-kube-api-access-vltm2\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084412 kubelet[3138]: I0813 00:26:53.084389 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-net\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084405 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-hubble-tls\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084422 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fdbb2df-aaba-4f24-bc0e-2324850f6077-xtables-lock\") pod \"kube-proxy-8q5r8\" (UID: \"3fdbb2df-aaba-4f24-bc0e-2324850f6077\") " pod="kube-system/kube-proxy-8q5r8" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084441 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fdbb2df-aaba-4f24-bc0e-2324850f6077-lib-modules\") pod \"kube-proxy-8q5r8\" (UID: \"3fdbb2df-aaba-4f24-bc0e-2324850f6077\") " pod="kube-system/kube-proxy-8q5r8" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084458 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cni-path\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084476 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3fdbb2df-aaba-4f24-bc0e-2324850f6077-kube-proxy\") pod \"kube-proxy-8q5r8\" (UID: \"3fdbb2df-aaba-4f24-bc0e-2324850f6077\") " pod="kube-system/kube-proxy-8q5r8" Aug 13 00:26:53.084530 kubelet[3138]: I0813 00:26:53.084492 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-run\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: I0813 00:26:53.084508 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-etc-cni-netd\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: I0813 00:26:53.084525 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-xtables-lock\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: I0813 00:26:53.084541 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-lib-modules\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: I0813 00:26:53.084557 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-bpf-maps\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: 
I0813 00:26:53.084578 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-hostproc\") pod \"cilium-tpxqk\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " pod="kube-system/cilium-tpxqk" Aug 13 00:26:53.084657 kubelet[3138]: I0813 00:26:53.084597 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fh7z\" (UniqueName: \"kubernetes.io/projected/3fdbb2df-aaba-4f24-bc0e-2324850f6077-kube-api-access-6fh7z\") pod \"kube-proxy-8q5r8\" (UID: \"3fdbb2df-aaba-4f24-bc0e-2324850f6077\") " pod="kube-system/kube-proxy-8q5r8" Aug 13 00:26:53.165460 systemd[1]: Created slice kubepods-besteffort-pod9039f2c4_9a0e_4479_abae_4ffb398676aa.slice - libcontainer container kubepods-besteffort-pod9039f2c4_9a0e_4479_abae_4ffb398676aa.slice. Aug 13 00:26:53.185870 kubelet[3138]: I0813 00:26:53.184804 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhns\" (UniqueName: \"kubernetes.io/projected/9039f2c4-9a0e-4479-abae-4ffb398676aa-kube-api-access-nfhns\") pod \"cilium-operator-5d85765b45-5bq78\" (UID: \"9039f2c4-9a0e-4479-abae-4ffb398676aa\") " pod="kube-system/cilium-operator-5d85765b45-5bq78" Aug 13 00:26:53.185870 kubelet[3138]: I0813 00:26:53.184890 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9039f2c4-9a0e-4479-abae-4ffb398676aa-cilium-config-path\") pod \"cilium-operator-5d85765b45-5bq78\" (UID: \"9039f2c4-9a0e-4479-abae-4ffb398676aa\") " pod="kube-system/cilium-operator-5d85765b45-5bq78" Aug 13 00:26:53.367513 containerd[1741]: time="2025-08-13T00:26:53.367470276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q5r8,Uid:3fdbb2df-aaba-4f24-bc0e-2324850f6077,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:53.374967 containerd[1741]: time="2025-08-13T00:26:53.374942386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpxqk,Uid:10ce67f0-3906-4fa3-9791-529f46ef09da,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:53.449702 containerd[1741]: time="2025-08-13T00:26:53.449666933Z" level=info msg="connecting to shim 6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1" address="unix:///run/containerd/s/e215b392ab94869629515b4cd12307ca81819efcefefa9c4e7bfaaf46513bc25" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:53.450969 containerd[1741]: time="2025-08-13T00:26:53.450701305Z" level=info msg="connecting to shim 4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:53.469238 systemd[1]: Started cri-containerd-6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1.scope - libcontainer container 6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1. Aug 13 00:26:53.472753 systemd[1]: Started cri-containerd-4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692.scope - libcontainer container 4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692. 
Aug 13 00:26:53.475484 containerd[1741]: time="2025-08-13T00:26:53.475463475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5bq78,Uid:9039f2c4-9a0e-4479-abae-4ffb398676aa,Namespace:kube-system,Attempt:0,}" Aug 13 00:26:53.503055 containerd[1741]: time="2025-08-13T00:26:53.502983050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpxqk,Uid:10ce67f0-3906-4fa3-9791-529f46ef09da,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\"" Aug 13 00:26:53.506549 containerd[1741]: time="2025-08-13T00:26:53.506517822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q5r8,Uid:3fdbb2df-aaba-4f24-bc0e-2324850f6077,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1\"" Aug 13 00:26:53.508970 containerd[1741]: time="2025-08-13T00:26:53.508914590Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:26:53.509589 containerd[1741]: time="2025-08-13T00:26:53.509559114Z" level=info msg="connecting to shim d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451" address="unix:///run/containerd/s/8800aae1d8dbc86318f442f40abb31cc30f297f3d802e3c538f3460d521bcc12" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:26:53.512644 containerd[1741]: time="2025-08-13T00:26:53.512597163Z" level=info msg="CreateContainer within sandbox \"6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:26:53.532312 containerd[1741]: time="2025-08-13T00:26:53.532287737Z" level=info msg="Container 5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:26:53.533229 systemd[1]: Started cri-containerd-d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451.scope - libcontainer container d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451. Aug 13 00:26:53.547406 containerd[1741]: time="2025-08-13T00:26:53.547366498Z" level=info msg="CreateContainer within sandbox \"6e4e5308f61e1b1db5173e5d3627b80b02553a0ee77c1a7fa8536c1f24293ae1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b\"" Aug 13 00:26:53.547895 containerd[1741]: time="2025-08-13T00:26:53.547850662Z" level=info msg="StartContainer for \"5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b\"" Aug 13 00:26:53.549722 containerd[1741]: time="2025-08-13T00:26:53.549678807Z" level=info msg="connecting to shim 5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b" address="unix:///run/containerd/s/e215b392ab94869629515b4cd12307ca81819efcefefa9c4e7bfaaf46513bc25" protocol=ttrpc version=3 Aug 13 00:26:53.568314 systemd[1]: Started cri-containerd-5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b.scope - libcontainer container 5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b. 
Aug 13 00:26:53.581390 containerd[1741]: time="2025-08-13T00:26:53.581363153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5bq78,Uid:9039f2c4-9a0e-4479-abae-4ffb398676aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\"" Aug 13 00:26:53.600979 containerd[1741]: time="2025-08-13T00:26:53.600947162Z" level=info msg="StartContainer for \"5ae7432bbaaf83ebceab976fe04b14805777eff57cab6450c01341587ca6941b\" returns successfully" Aug 13 00:26:54.407875 kubelet[3138]: I0813 00:26:54.407574 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8q5r8" podStartSLOduration=1.407555943 podStartE2EDuration="1.407555943s" podCreationTimestamp="2025-08-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:26:54.407484528 +0000 UTC m=+7.141788345" watchObservedRunningTime="2025-08-13 00:26:54.407555943 +0000 UTC m=+7.141859761" Aug 13 00:26:57.035077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883028343.mount: Deactivated successfully. Aug 13 00:26:58.563342 containerd[1741]: time="2025-08-13T00:26:58.563262494Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:58.565178 containerd[1741]: time="2025-08-13T00:26:58.565142760Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:26:58.567394 containerd[1741]: time="2025-08-13T00:26:58.567360398Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:26:58.568263 containerd[1741]: time="2025-08-13T00:26:58.568236633Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.059287945s" Aug 13 00:26:58.568493 containerd[1741]: time="2025-08-13T00:26:58.568270288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:26:58.569978 containerd[1741]: time="2025-08-13T00:26:58.569354854Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:26:58.570747 containerd[1741]: time="2025-08-13T00:26:58.570715776Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:26:58.589782 containerd[1741]: time="2025-08-13T00:26:58.588537395Z" level=info msg="Container caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:26:58.592226 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount508632615.mount: Deactivated successfully. Aug 13 00:26:58.598869 containerd[1741]: time="2025-08-13T00:26:58.598847304Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\"" Aug 13 00:26:58.599952 containerd[1741]: time="2025-08-13T00:26:58.599247039Z" level=info msg="StartContainer for \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\"" Aug 13 00:26:58.599952 containerd[1741]: time="2025-08-13T00:26:58.599846196Z" level=info msg="connecting to shim caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" protocol=ttrpc version=3 Aug 13 00:26:58.619177 systemd[1]: Started cri-containerd-caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f.scope - libcontainer container caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f. Aug 13 00:26:58.643103 containerd[1741]: time="2025-08-13T00:26:58.643073952Z" level=info msg="StartContainer for \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" returns successfully" Aug 13 00:26:58.652064 systemd[1]: cri-containerd-caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f.scope: Deactivated successfully. Aug 13 00:26:58.652995 containerd[1741]: time="2025-08-13T00:26:58.652160104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" id:\"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" pid:3549 exited_at:{seconds:1755044818 nanos:651805283}" Aug 13 00:26:58.652995 containerd[1741]: time="2025-08-13T00:26:58.652248158Z" level=info msg="received exit event container_id:\"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" id:\"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" pid:3549 exited_at:{seconds:1755044818 nanos:651805283}" Aug 13 00:26:58.667739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f-rootfs.mount: Deactivated successfully. Aug 13 00:27:03.254170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275191322.mount: Deactivated successfully. Aug 13 00:27:03.426256 containerd[1741]: time="2025-08-13T00:27:03.421657616Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:27:03.451037 containerd[1741]: time="2025-08-13T00:27:03.450876781Z" level=info msg="Container c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:03.452508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659412860.mount: Deactivated successfully. Aug 13 00:27:03.455794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272679250.mount: Deactivated successfully. 
Aug 13 00:27:03.464416 containerd[1741]: time="2025-08-13T00:27:03.464386109Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\"" Aug 13 00:27:03.465379 containerd[1741]: time="2025-08-13T00:27:03.465355192Z" level=info msg="StartContainer for \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\"" Aug 13 00:27:03.466686 containerd[1741]: time="2025-08-13T00:27:03.466666548Z" level=info msg="connecting to shim c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" protocol=ttrpc version=3 Aug 13 00:27:03.489463 systemd[1]: Started cri-containerd-c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9.scope - libcontainer container c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9. Aug 13 00:27:03.520331 containerd[1741]: time="2025-08-13T00:27:03.520261437Z" level=info msg="StartContainer for \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" returns successfully" Aug 13 00:27:03.534928 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:27:03.535727 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:27:03.536217 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:27:03.539541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:27:03.541391 systemd[1]: cri-containerd-c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9.scope: Deactivated successfully. Aug 13 00:27:03.544672 containerd[1741]: time="2025-08-13T00:27:03.543480922Z" level=info msg="received exit event container_id:\"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" id:\"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" pid:3606 exited_at:{seconds:1755044823 nanos:543326808}" Aug 13 00:27:03.544672 containerd[1741]: time="2025-08-13T00:27:03.543800446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" id:\"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" pid:3606 exited_at:{seconds:1755044823 nanos:543326808}" Aug 13 00:27:03.565235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:27:03.956873 containerd[1741]: time="2025-08-13T00:27:03.956834865Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:27:03.958811 containerd[1741]: time="2025-08-13T00:27:03.958777021Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:27:03.961922 containerd[1741]: time="2025-08-13T00:27:03.961884022Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:27:03.962875 containerd[1741]: time="2025-08-13T00:27:03.962778730Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.392671733s" Aug 13 00:27:03.962875 containerd[1741]: time="2025-08-13T00:27:03.962809583Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:27:03.964740 containerd[1741]: time="2025-08-13T00:27:03.964446666Z" level=info msg="CreateContainer within sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:27:03.978621 containerd[1741]: time="2025-08-13T00:27:03.978596494Z" level=info msg="Container 2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:03.990823 containerd[1741]: time="2025-08-13T00:27:03.990794722Z" level=info msg="CreateContainer within sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\"" Aug 13 00:27:03.992065 containerd[1741]: time="2025-08-13T00:27:03.991214674Z" level=info msg="StartContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\"" Aug 13 00:27:03.992065 containerd[1741]: time="2025-08-13T00:27:03.991943178Z" level=info msg="connecting to shim 2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9" address="unix:///run/containerd/s/8800aae1d8dbc86318f442f40abb31cc30f297f3d802e3c538f3460d521bcc12" protocol=ttrpc version=3 Aug 13 00:27:04.009190 systemd[1]: Started cri-containerd-2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9.scope - libcontainer container 2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9. 
Aug 13 00:27:04.033897 containerd[1741]: time="2025-08-13T00:27:04.033835944Z" level=info msg="StartContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" returns successfully" Aug 13 00:27:04.425201 containerd[1741]: time="2025-08-13T00:27:04.425167306Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:27:04.449706 containerd[1741]: time="2025-08-13T00:27:04.449678676Z" level=info msg="Container 1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:04.452809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913041080.mount: Deactivated successfully. Aug 13 00:27:04.467782 kubelet[3138]: I0813 00:27:04.467684 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5bq78" podStartSLOduration=1.086270225 podStartE2EDuration="11.467664926s" podCreationTimestamp="2025-08-13 00:26:53 +0000 UTC" firstStartedPulling="2025-08-13 00:26:53.581985535 +0000 UTC m=+6.316289333" lastFinishedPulling="2025-08-13 00:27:03.963380223 +0000 UTC m=+16.697684034" observedRunningTime="2025-08-13 00:27:04.436913104 +0000 UTC m=+17.171216923" watchObservedRunningTime="2025-08-13 00:27:04.467664926 +0000 UTC m=+17.201968740" Aug 13 00:27:04.473077 containerd[1741]: time="2025-08-13T00:27:04.473046780Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\"" Aug 13 00:27:04.474413 containerd[1741]: time="2025-08-13T00:27:04.474387357Z" level=info msg="StartContainer for \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\"" Aug 13 00:27:04.479427 containerd[1741]: time="2025-08-13T00:27:04.479362312Z" level=info msg="connecting to shim 1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" protocol=ttrpc version=3 Aug 13 00:27:04.507157 systemd[1]: Started cri-containerd-1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f.scope - libcontainer container 1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f. Aug 13 00:27:04.559605 containerd[1741]: time="2025-08-13T00:27:04.559548498Z" level=info msg="StartContainer for \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" returns successfully" Aug 13 00:27:04.566549 systemd[1]: cri-containerd-1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f.scope: Deactivated successfully. 
Aug 13 00:27:04.567706 containerd[1741]: time="2025-08-13T00:27:04.567682845Z" level=info msg="received exit event container_id:\"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" id:\"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" pid:3690 exited_at:{seconds:1755044824 nanos:567491596}" Aug 13 00:27:04.568437 containerd[1741]: time="2025-08-13T00:27:04.568410508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" id:\"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" pid:3690 exited_at:{seconds:1755044824 nanos:567491596}" Aug 13 00:27:04.601306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f-rootfs.mount: Deactivated successfully. Aug 13 00:27:05.428629 containerd[1741]: time="2025-08-13T00:27:05.428528300Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:27:05.455249 containerd[1741]: time="2025-08-13T00:27:05.452909069Z" level=info msg="Container b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:05.456366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157576507.mount: Deactivated successfully. Aug 13 00:27:05.467657 containerd[1741]: time="2025-08-13T00:27:05.467629873Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\"" Aug 13 00:27:05.468112 containerd[1741]: time="2025-08-13T00:27:05.468033626Z" level=info msg="StartContainer for \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\"" Aug 13 00:27:05.469165 containerd[1741]: time="2025-08-13T00:27:05.469128626Z" level=info msg="connecting to shim b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" protocol=ttrpc version=3 Aug 13 00:27:05.497148 systemd[1]: Started cri-containerd-b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478.scope - libcontainer container b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478. Aug 13 00:27:05.515634 systemd[1]: cri-containerd-b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478.scope: Deactivated successfully. 
Aug 13 00:27:05.517680 containerd[1741]: time="2025-08-13T00:27:05.517653436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" id:\"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" pid:3732 exited_at:{seconds:1755044825 nanos:517400135}" Aug 13 00:27:05.520474 containerd[1741]: time="2025-08-13T00:27:05.520368208Z" level=info msg="received exit event container_id:\"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" id:\"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" pid:3732 exited_at:{seconds:1755044825 nanos:517400135}" Aug 13 00:27:05.525818 containerd[1741]: time="2025-08-13T00:27:05.525788864Z" level=info msg="StartContainer for \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" returns successfully" Aug 13 00:27:05.535966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478-rootfs.mount: Deactivated successfully. Aug 13 00:27:06.432400 containerd[1741]: time="2025-08-13T00:27:06.432356576Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:27:06.451919 containerd[1741]: time="2025-08-13T00:27:06.451167601Z" level=info msg="Container b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:06.465360 containerd[1741]: time="2025-08-13T00:27:06.465332139Z" level=info msg="CreateContainer within sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\"" Aug 13 00:27:06.465927 containerd[1741]: time="2025-08-13T00:27:06.465879643Z" level=info msg="StartContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\"" Aug 13 00:27:06.466910 containerd[1741]: time="2025-08-13T00:27:06.466885833Z" level=info msg="connecting to shim b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28" address="unix:///run/containerd/s/cac7b456a9b7268310d3df54e275fca3fe9dd7b96a60abb20fa0b096ccf5fb4d" protocol=ttrpc version=3 Aug 13 00:27:06.490158 systemd[1]: Started cri-containerd-b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28.scope - libcontainer container b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28. Aug 13 00:27:06.518690 containerd[1741]: time="2025-08-13T00:27:06.518605791Z" level=info msg="StartContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" returns successfully" Aug 13 00:27:06.578496 containerd[1741]: time="2025-08-13T00:27:06.578475287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" id:\"976ccafb3b6b3f1dc5f9b049ea24f99d1dee48b217935c1363fbac322604179d\" pid:3800 exited_at:{seconds:1755044826 nanos:578254854}" Aug 13 00:27:06.661925 kubelet[3138]: I0813 00:27:06.661888 3138 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:27:06.702390 systemd[1]: Created slice kubepods-burstable-pod07ca9dc7_1003_4ff6_a82e_fb4a9f439adf.slice - libcontainer container kubepods-burstable-pod07ca9dc7_1003_4ff6_a82e_fb4a9f439adf.slice. 
Aug 13 00:27:06.713344 systemd[1]: Created slice kubepods-burstable-podaa31547e_de3b_4b87_924f_0f8c838fc98c.slice - libcontainer container kubepods-burstable-podaa31547e_de3b_4b87_924f_0f8c838fc98c.slice. Aug 13 00:27:06.770091 kubelet[3138]: I0813 00:27:06.768994 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07ca9dc7-1003-4ff6-a82e-fb4a9f439adf-config-volume\") pod \"coredns-7c65d6cfc9-gbgt2\" (UID: \"07ca9dc7-1003-4ff6-a82e-fb4a9f439adf\") " pod="kube-system/coredns-7c65d6cfc9-gbgt2" Aug 13 00:27:06.770091 kubelet[3138]: I0813 00:27:06.770040 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghd8z\" (UniqueName: \"kubernetes.io/projected/07ca9dc7-1003-4ff6-a82e-fb4a9f439adf-kube-api-access-ghd8z\") pod \"coredns-7c65d6cfc9-gbgt2\" (UID: \"07ca9dc7-1003-4ff6-a82e-fb4a9f439adf\") " pod="kube-system/coredns-7c65d6cfc9-gbgt2" Aug 13 00:27:06.770091 kubelet[3138]: I0813 00:27:06.770074 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phq6t\" (UniqueName: \"kubernetes.io/projected/aa31547e-de3b-4b87-924f-0f8c838fc98c-kube-api-access-phq6t\") pod \"coredns-7c65d6cfc9-2hjpw\" (UID: \"aa31547e-de3b-4b87-924f-0f8c838fc98c\") " pod="kube-system/coredns-7c65d6cfc9-2hjpw" Aug 13 00:27:06.770348 kubelet[3138]: I0813 00:27:06.770099 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa31547e-de3b-4b87-924f-0f8c838fc98c-config-volume\") pod \"coredns-7c65d6cfc9-2hjpw\" (UID: \"aa31547e-de3b-4b87-924f-0f8c838fc98c\") " pod="kube-system/coredns-7c65d6cfc9-2hjpw" Aug 13 00:27:07.007931 containerd[1741]: time="2025-08-13T00:27:07.007858236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbgt2,Uid:07ca9dc7-1003-4ff6-a82e-fb4a9f439adf,Namespace:kube-system,Attempt:0,}" Aug 13 00:27:07.017737 containerd[1741]: time="2025-08-13T00:27:07.017715949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2hjpw,Uid:aa31547e-de3b-4b87-924f-0f8c838fc98c,Namespace:kube-system,Attempt:0,}" Aug 13 00:27:08.540870 systemd-networkd[1583]: cilium_host: Link UP Aug 13 00:27:08.542811 systemd-networkd[1583]: cilium_net: Link UP Aug 13 00:27:08.543633 systemd-networkd[1583]: cilium_net: Gained carrier Aug 13 00:27:08.543737 systemd-networkd[1583]: cilium_host: Gained carrier Aug 13 00:27:08.808125 systemd-networkd[1583]: cilium_net: Gained IPv6LL Aug 13 00:27:08.815494 systemd-networkd[1583]: cilium_vxlan: Link UP Aug 13 00:27:08.815501 systemd-networkd[1583]: cilium_vxlan: Gained carrier Aug 13 00:27:08.952105 systemd-networkd[1583]: cilium_host: Gained IPv6LL Aug 13 00:27:09.067353 kernel: NET: Registered PF_ALG protocol family Aug 13 00:27:09.568395 systemd-networkd[1583]: lxc_health: Link UP Aug 13 00:27:09.568655 systemd-networkd[1583]: lxc_health: Gained carrier Aug 13 00:27:10.035089 systemd-networkd[1583]: lxce3c3d479dfab: Link UP Aug 13 00:27:10.040051 kernel: eth0: renamed from tmp90db1 Aug 13 00:27:10.043637 systemd-networkd[1583]: lxce3c3d479dfab: Gained carrier Aug 13 00:27:10.075950 systemd-networkd[1583]: lxc2b6f76995f3a: Link UP Aug 13 00:27:10.078050 kernel: eth0: renamed from tmp4b479 Aug 13 00:27:10.079440 systemd-networkd[1583]: lxc2b6f76995f3a: Gained carrier Aug 13 00:27:10.313121 
systemd-networkd[1583]: cilium_vxlan: Gained IPv6LL Aug 13 00:27:11.208275 systemd-networkd[1583]: lxce3c3d479dfab: Gained IPv6LL Aug 13 00:27:11.400782 kubelet[3138]: I0813 00:27:11.400304 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tpxqk" podStartSLOduration=13.339162702 podStartE2EDuration="18.400283737s" podCreationTimestamp="2025-08-13 00:26:53 +0000 UTC" firstStartedPulling="2025-08-13 00:26:53.508101616 +0000 UTC m=+6.242405433" lastFinishedPulling="2025-08-13 00:26:58.569222657 +0000 UTC m=+11.303526468" observedRunningTime="2025-08-13 00:27:07.46051088 +0000 UTC m=+20.194814697" watchObservedRunningTime="2025-08-13 00:27:11.400283737 +0000 UTC m=+24.134587676" Aug 13 00:27:11.464141 systemd-networkd[1583]: lxc_health: Gained IPv6LL Aug 13 00:27:11.848232 systemd-networkd[1583]: lxc2b6f76995f3a: Gained IPv6LL Aug 13 00:27:12.750682 containerd[1741]: time="2025-08-13T00:27:12.750175835Z" level=info msg="connecting to shim 90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0" address="unix:///run/containerd/s/2bd96f7e0b60d94e472b1e4c93a560b84a008f9291b3dd17444818cfcae41acd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:27:12.777888 containerd[1741]: time="2025-08-13T00:27:12.777863775Z" level=info msg="connecting to shim 4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446" address="unix:///run/containerd/s/8877f383187cf41a770ccc77f483d0bc065c0604feb435ef6a83c7c34b997ebe" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:27:12.784180 systemd[1]: Started cri-containerd-90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0.scope - libcontainer container 90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0. Aug 13 00:27:12.812382 systemd[1]: Started cri-containerd-4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446.scope - libcontainer container 4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446. 
Aug 13 00:27:12.860166 containerd[1741]: time="2025-08-13T00:27:12.860132033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gbgt2,Uid:07ca9dc7-1003-4ff6-a82e-fb4a9f439adf,Namespace:kube-system,Attempt:0,} returns sandbox id \"90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0\"" Aug 13 00:27:12.863628 containerd[1741]: time="2025-08-13T00:27:12.863552396Z" level=info msg="CreateContainer within sandbox \"90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:27:12.866542 containerd[1741]: time="2025-08-13T00:27:12.866511762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2hjpw,Uid:aa31547e-de3b-4b87-924f-0f8c838fc98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446\"" Aug 13 00:27:12.869433 containerd[1741]: time="2025-08-13T00:27:12.869411492Z" level=info msg="CreateContainer within sandbox \"4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:27:12.885708 containerd[1741]: time="2025-08-13T00:27:12.885144204Z" level=info msg="Container 0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:12.889544 containerd[1741]: time="2025-08-13T00:27:12.889526277Z" level=info msg="Container 3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:27:12.901330 containerd[1741]: time="2025-08-13T00:27:12.901302548Z" level=info msg="CreateContainer within sandbox \"90db1ff5f4f27bb45fd304c5708c7def9f5491150abf2b9a7e8f382b5ca054a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd\"" Aug 13 00:27:12.901698 containerd[1741]: time="2025-08-13T00:27:12.901675975Z" level=info msg="StartContainer for \"0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd\"" Aug 13 00:27:12.902484 containerd[1741]: time="2025-08-13T00:27:12.902447452Z" level=info msg="connecting to shim 0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd" address="unix:///run/containerd/s/2bd96f7e0b60d94e472b1e4c93a560b84a008f9291b3dd17444818cfcae41acd" protocol=ttrpc version=3 Aug 13 00:27:12.904780 containerd[1741]: time="2025-08-13T00:27:12.904169264Z" level=info msg="CreateContainer within sandbox \"4b479eb28f983f2db281717e1779d898086250940ada7e83fa278f11945aa446\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94\"" Aug 13 00:27:12.904780 containerd[1741]: time="2025-08-13T00:27:12.904600764Z" level=info msg="StartContainer for \"3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94\"" Aug 13 00:27:12.905386 containerd[1741]: time="2025-08-13T00:27:12.905360235Z" level=info msg="connecting to shim 3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94" address="unix:///run/containerd/s/8877f383187cf41a770ccc77f483d0bc065c0604feb435ef6a83c7c34b997ebe" protocol=ttrpc version=3 Aug 13 00:27:12.923139 systemd[1]: Started cri-containerd-0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd.scope - libcontainer container 0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd. 
Aug 13 00:27:12.926363 systemd[1]: Started cri-containerd-3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94.scope - libcontainer container 3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94. Aug 13 00:27:12.954472 containerd[1741]: time="2025-08-13T00:27:12.954233644Z" level=info msg="StartContainer for \"0e3a9b8eb9185f5a766bcedc73886dfb4c4d3df032ea302abf08f1b772584bfd\" returns successfully" Aug 13 00:27:12.958153 containerd[1741]: time="2025-08-13T00:27:12.958131223Z" level=info msg="StartContainer for \"3b6f758c0220cff9f63dc0b854db6ed7df9777ab7b8d7b2bbba76b83e2ab4c94\" returns successfully" Aug 13 00:27:13.459261 kubelet[3138]: I0813 00:27:13.459105 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gbgt2" podStartSLOduration=20.459089214 podStartE2EDuration="20.459089214s" podCreationTimestamp="2025-08-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:27:13.458954324 +0000 UTC m=+26.193258148" watchObservedRunningTime="2025-08-13 00:27:13.459089214 +0000 UTC m=+26.193393024" Aug 13 00:27:13.503229 kubelet[3138]: I0813 00:27:13.503189 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2hjpw" podStartSLOduration=20.503175488 podStartE2EDuration="20.503175488s" podCreationTimestamp="2025-08-13 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:27:13.502923858 +0000 UTC m=+26.237227671" watchObservedRunningTime="2025-08-13 00:27:13.503175488 +0000 UTC m=+26.237479298" Aug 13 00:27:15.995292 kubelet[3138]: I0813 00:27:15.995199 3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:28:24.977430 systemd[1]: Started sshd@7-10.200.8.5:22-10.200.16.10:41548.service - OpenSSH per-connection server daemon (10.200.16.10:41548). Aug 13 00:28:25.602837 sshd[4459]: Accepted publickey for core from 10.200.16.10 port 41548 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:25.603957 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:25.608388 systemd-logind[1696]: New session 10 of user core. Aug 13 00:28:25.617197 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:28:26.109740 sshd[4461]: Connection closed by 10.200.16.10 port 41548 Aug 13 00:28:26.110209 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:26.112998 systemd[1]: sshd@7-10.200.8.5:22-10.200.16.10:41548.service: Deactivated successfully. Aug 13 00:28:26.114849 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:28:26.115684 systemd-logind[1696]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:28:26.117211 systemd-logind[1696]: Removed session 10. Aug 13 00:28:31.221133 systemd[1]: Started sshd@8-10.200.8.5:22-10.200.16.10:56874.service - OpenSSH per-connection server daemon (10.200.16.10:56874). Aug 13 00:28:31.848476 sshd[4474]: Accepted publickey for core from 10.200.16.10 port 56874 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:31.849735 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:31.853848 systemd-logind[1696]: New session 11 of user core. 
Aug 13 00:28:31.865166 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:28:32.338067 sshd[4476]: Connection closed by 10.200.16.10 port 56874 Aug 13 00:28:32.338517 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:32.341392 systemd[1]: sshd@8-10.200.8.5:22-10.200.16.10:56874.service: Deactivated successfully. Aug 13 00:28:32.343004 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:28:32.343858 systemd-logind[1696]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:28:32.344935 systemd-logind[1696]: Removed session 11. Aug 13 00:28:37.453097 systemd[1]: Started sshd@9-10.200.8.5:22-10.200.16.10:56878.service - OpenSSH per-connection server daemon (10.200.16.10:56878). Aug 13 00:28:38.083692 sshd[4489]: Accepted publickey for core from 10.200.16.10 port 56878 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:38.084822 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:38.088933 systemd-logind[1696]: New session 12 of user core. Aug 13 00:28:38.096166 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:28:38.578142 sshd[4491]: Connection closed by 10.200.16.10 port 56878 Aug 13 00:28:38.578600 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:38.581066 systemd[1]: sshd@9-10.200.8.5:22-10.200.16.10:56878.service: Deactivated successfully. Aug 13 00:28:38.582760 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:28:38.584947 systemd-logind[1696]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:28:38.585992 systemd-logind[1696]: Removed session 12. Aug 13 00:28:43.690201 systemd[1]: Started sshd@10-10.200.8.5:22-10.200.16.10:35340.service - OpenSSH per-connection server daemon (10.200.16.10:35340). Aug 13 00:28:44.317889 sshd[4504]: Accepted publickey for core from 10.200.16.10 port 35340 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:44.319251 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:44.323666 systemd-logind[1696]: New session 13 of user core. Aug 13 00:28:44.328200 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:28:44.807059 sshd[4506]: Connection closed by 10.200.16.10 port 35340 Aug 13 00:28:44.807501 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:44.809846 systemd[1]: sshd@10-10.200.8.5:22-10.200.16.10:35340.service: Deactivated successfully. Aug 13 00:28:44.811510 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:28:44.813343 systemd-logind[1696]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:28:44.814159 systemd-logind[1696]: Removed session 13. Aug 13 00:28:49.923977 systemd[1]: Started sshd@11-10.200.8.5:22-10.200.16.10:35346.service - OpenSSH per-connection server daemon (10.200.16.10:35346). Aug 13 00:28:50.551774 sshd[4521]: Accepted publickey for core from 10.200.16.10 port 35346 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:50.552846 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:50.557088 systemd-logind[1696]: New session 14 of user core. Aug 13 00:28:50.566154 systemd[1]: Started session-14.scope - Session 14 of User core. 
Aug 13 00:28:51.037162 sshd[4523]: Connection closed by 10.200.16.10 port 35346 Aug 13 00:28:51.037659 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:51.040662 systemd[1]: sshd@11-10.200.8.5:22-10.200.16.10:35346.service: Deactivated successfully. Aug 13 00:28:51.042402 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:28:51.043138 systemd-logind[1696]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:28:51.044594 systemd-logind[1696]: Removed session 14. Aug 13 00:28:56.148742 systemd[1]: Started sshd@12-10.200.8.5:22-10.200.16.10:40004.service - OpenSSH per-connection server daemon (10.200.16.10:40004). Aug 13 00:28:56.778638 sshd[4538]: Accepted publickey for core from 10.200.16.10 port 40004 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:56.779726 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:56.784035 systemd-logind[1696]: New session 15 of user core. Aug 13 00:28:56.787147 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:28:57.270401 sshd[4540]: Connection closed by 10.200.16.10 port 40004 Aug 13 00:28:57.270902 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:57.273881 systemd[1]: sshd@12-10.200.8.5:22-10.200.16.10:40004.service: Deactivated successfully. Aug 13 00:28:57.275603 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:28:57.276445 systemd-logind[1696]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:28:57.277637 systemd-logind[1696]: Removed session 15. Aug 13 00:28:57.380876 systemd[1]: Started sshd@13-10.200.8.5:22-10.200.16.10:40014.service - OpenSSH per-connection server daemon (10.200.16.10:40014). Aug 13 00:28:58.007688 sshd[4553]: Accepted publickey for core from 10.200.16.10 port 40014 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:58.008812 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:58.013199 systemd-logind[1696]: New session 16 of user core. Aug 13 00:28:58.018169 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:28:58.519898 sshd[4555]: Connection closed by 10.200.16.10 port 40014 Aug 13 00:28:58.520409 sshd-session[4553]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:58.523241 systemd[1]: sshd@13-10.200.8.5:22-10.200.16.10:40014.service: Deactivated successfully. Aug 13 00:28:58.525223 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:28:58.527303 systemd-logind[1696]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:28:58.528321 systemd-logind[1696]: Removed session 16. Aug 13 00:28:58.629928 systemd[1]: Started sshd@14-10.200.8.5:22-10.200.16.10:40018.service - OpenSSH per-connection server daemon (10.200.16.10:40018). Aug 13 00:28:59.257095 sshd[4565]: Accepted publickey for core from 10.200.16.10 port 40018 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:28:59.258162 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:59.262087 systemd-logind[1696]: New session 17 of user core. Aug 13 00:28:59.268151 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 00:28:59.753869 sshd[4567]: Connection closed by 10.200.16.10 port 40018 Aug 13 00:28:59.754376 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:59.758594 systemd[1]: sshd@14-10.200.8.5:22-10.200.16.10:40018.service: Deactivated successfully. Aug 13 00:28:59.760661 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:28:59.761416 systemd-logind[1696]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:28:59.762761 systemd-logind[1696]: Removed session 17. Aug 13 00:29:04.865309 systemd[1]: Started sshd@15-10.200.8.5:22-10.200.16.10:43926.service - OpenSSH per-connection server daemon (10.200.16.10:43926). Aug 13 00:29:05.492831 sshd[4579]: Accepted publickey for core from 10.200.16.10 port 43926 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:05.493991 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:05.498419 systemd-logind[1696]: New session 18 of user core. Aug 13 00:29:05.501173 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:29:05.978487 sshd[4581]: Connection closed by 10.200.16.10 port 43926 Aug 13 00:29:05.978960 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:05.982076 systemd[1]: sshd@15-10.200.8.5:22-10.200.16.10:43926.service: Deactivated successfully. Aug 13 00:29:05.983874 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:29:05.984655 systemd-logind[1696]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:29:05.985585 systemd-logind[1696]: Removed session 18. Aug 13 00:29:06.089748 systemd[1]: Started sshd@16-10.200.8.5:22-10.200.16.10:43932.service - OpenSSH per-connection server daemon (10.200.16.10:43932). Aug 13 00:29:06.716239 sshd[4593]: Accepted publickey for core from 10.200.16.10 port 43932 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:06.717260 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:06.721424 systemd-logind[1696]: New session 19 of user core. Aug 13 00:29:06.726196 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:29:07.269955 sshd[4595]: Connection closed by 10.200.16.10 port 43932 Aug 13 00:29:07.271814 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:07.274223 systemd[1]: sshd@16-10.200.8.5:22-10.200.16.10:43932.service: Deactivated successfully. Aug 13 00:29:07.276118 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:29:07.277344 systemd-logind[1696]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:29:07.278656 systemd-logind[1696]: Removed session 19. Aug 13 00:29:07.382592 systemd[1]: Started sshd@17-10.200.8.5:22-10.200.16.10:43944.service - OpenSSH per-connection server daemon (10.200.16.10:43944). Aug 13 00:29:08.008444 sshd[4604]: Accepted publickey for core from 10.200.16.10 port 43944 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:08.009589 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:08.014033 systemd-logind[1696]: New session 20 of user core. Aug 13 00:29:08.019197 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 00:29:09.455499 sshd[4606]: Connection closed by 10.200.16.10 port 43944 Aug 13 00:29:09.455983 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:09.458347 systemd[1]: sshd@17-10.200.8.5:22-10.200.16.10:43944.service: Deactivated successfully. Aug 13 00:29:09.460026 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:29:09.461844 systemd-logind[1696]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:29:09.463261 systemd-logind[1696]: Removed session 20. Aug 13 00:29:09.569448 systemd[1]: Started sshd@18-10.200.8.5:22-10.200.16.10:43956.service - OpenSSH per-connection server daemon (10.200.16.10:43956). Aug 13 00:29:10.195539 sshd[4623]: Accepted publickey for core from 10.200.16.10 port 43956 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:10.196607 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:10.200878 systemd-logind[1696]: New session 21 of user core. Aug 13 00:29:10.206140 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:29:10.760760 sshd[4625]: Connection closed by 10.200.16.10 port 43956 Aug 13 00:29:10.761295 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:10.763779 systemd[1]: sshd@18-10.200.8.5:22-10.200.16.10:43956.service: Deactivated successfully. Aug 13 00:29:10.765422 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:29:10.766735 systemd-logind[1696]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:29:10.768041 systemd-logind[1696]: Removed session 21. Aug 13 00:29:10.871467 systemd[1]: Started sshd@19-10.200.8.5:22-10.200.16.10:34170.service - OpenSSH per-connection server daemon (10.200.16.10:34170). Aug 13 00:29:11.495838 sshd[4634]: Accepted publickey for core from 10.200.16.10 port 34170 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:11.496970 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:11.501266 systemd-logind[1696]: New session 22 of user core. Aug 13 00:29:11.503202 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:29:11.988960 sshd[4636]: Connection closed by 10.200.16.10 port 34170 Aug 13 00:29:11.989371 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:11.992067 systemd[1]: sshd@19-10.200.8.5:22-10.200.16.10:34170.service: Deactivated successfully. Aug 13 00:29:11.993744 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:29:11.994633 systemd-logind[1696]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:29:11.995851 systemd-logind[1696]: Removed session 22. Aug 13 00:29:17.104785 systemd[1]: Started sshd@20-10.200.8.5:22-10.200.16.10:34172.service - OpenSSH per-connection server daemon (10.200.16.10:34172). Aug 13 00:29:17.733520 sshd[4651]: Accepted publickey for core from 10.200.16.10 port 34172 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:17.734645 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:17.739049 systemd-logind[1696]: New session 23 of user core. Aug 13 00:29:17.742179 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 00:29:18.221474 sshd[4653]: Connection closed by 10.200.16.10 port 34172 Aug 13 00:29:18.221958 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:18.224945 systemd[1]: sshd@20-10.200.8.5:22-10.200.16.10:34172.service: Deactivated successfully. Aug 13 00:29:18.226797 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:29:18.227640 systemd-logind[1696]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:29:18.228811 systemd-logind[1696]: Removed session 23. Aug 13 00:29:23.340938 systemd[1]: Started sshd@21-10.200.8.5:22-10.200.16.10:35602.service - OpenSSH per-connection server daemon (10.200.16.10:35602). Aug 13 00:29:23.971995 sshd[4665]: Accepted publickey for core from 10.200.16.10 port 35602 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:23.973172 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:23.977087 systemd-logind[1696]: New session 24 of user core. Aug 13 00:29:23.982193 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:29:24.467108 sshd[4669]: Connection closed by 10.200.16.10 port 35602 Aug 13 00:29:24.467533 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:24.470313 systemd[1]: sshd@21-10.200.8.5:22-10.200.16.10:35602.service: Deactivated successfully. Aug 13 00:29:24.471861 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:29:24.472565 systemd-logind[1696]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:29:24.473508 systemd-logind[1696]: Removed session 24. Aug 13 00:29:29.577760 systemd[1]: Started sshd@22-10.200.8.5:22-10.200.16.10:35616.service - OpenSSH per-connection server daemon (10.200.16.10:35616). Aug 13 00:29:30.202321 sshd[4681]: Accepted publickey for core from 10.200.16.10 port 35616 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:30.203314 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:30.207136 systemd-logind[1696]: New session 25 of user core. Aug 13 00:29:30.216167 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:29:30.689775 sshd[4683]: Connection closed by 10.200.16.10 port 35616 Aug 13 00:29:30.690255 sshd-session[4681]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:30.692742 systemd[1]: sshd@22-10.200.8.5:22-10.200.16.10:35616.service: Deactivated successfully. Aug 13 00:29:30.694618 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:29:30.696499 systemd-logind[1696]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:29:30.697405 systemd-logind[1696]: Removed session 25. Aug 13 00:29:30.801770 systemd[1]: Started sshd@23-10.200.8.5:22-10.200.16.10:42376.service - OpenSSH per-connection server daemon (10.200.16.10:42376). Aug 13 00:29:31.428523 sshd[4695]: Accepted publickey for core from 10.200.16.10 port 42376 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:31.429650 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:31.434005 systemd-logind[1696]: New session 26 of user core. Aug 13 00:29:31.438147 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 00:29:33.053211 containerd[1741]: time="2025-08-13T00:29:33.053163276Z" level=info msg="StopContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" with timeout 30 (s)" Aug 13 00:29:33.055179 containerd[1741]: time="2025-08-13T00:29:33.055148648Z" level=info msg="Stop container \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" with signal terminated" Aug 13 00:29:33.066701 containerd[1741]: time="2025-08-13T00:29:33.066668553Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:29:33.066973 systemd[1]: cri-containerd-2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9.scope: Deactivated successfully. Aug 13 00:29:33.069385 containerd[1741]: time="2025-08-13T00:29:33.069297451Z" level=info msg="received exit event container_id:\"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" id:\"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" pid:3659 exited_at:{seconds:1755044973 nanos:69094645}" Aug 13 00:29:33.069672 containerd[1741]: time="2025-08-13T00:29:33.069538669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" id:\"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" pid:3659 exited_at:{seconds:1755044973 nanos:69094645}" Aug 13 00:29:33.072772 containerd[1741]: time="2025-08-13T00:29:33.072747912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" id:\"f6c1a08db208a435fe991b58c4562aa27da181601cd94b96579330c4514813d9\" pid:4716 exited_at:{seconds:1755044973 nanos:72604437}" Aug 13 00:29:33.074470 containerd[1741]: time="2025-08-13T00:29:33.074437854Z" level=info msg="StopContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" with timeout 2 (s)" Aug 13 00:29:33.074847 containerd[1741]: time="2025-08-13T00:29:33.074813467Z" level=info msg="Stop container \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" with signal terminated" Aug 13 00:29:33.085121 systemd-networkd[1583]: lxc_health: Link DOWN Aug 13 00:29:33.085873 systemd-networkd[1583]: lxc_health: Lost carrier Aug 13 00:29:33.095962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9-rootfs.mount: Deactivated successfully. Aug 13 00:29:33.101284 systemd[1]: cri-containerd-b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28.scope: Deactivated successfully. Aug 13 00:29:33.101643 systemd[1]: cri-containerd-b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28.scope: Consumed 4.896s CPU time, 125.4M memory peak, 144K read from disk, 13.3M written to disk. 
Aug 13 00:29:33.103511 containerd[1741]: time="2025-08-13T00:29:33.103491323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" pid:3769 exited_at:{seconds:1755044973 nanos:103324167}" Aug 13 00:29:33.103511 containerd[1741]: time="2025-08-13T00:29:33.103503152Z" level=info msg="received exit event container_id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" id:\"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" pid:3769 exited_at:{seconds:1755044973 nanos:103324167}" Aug 13 00:29:33.116444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28-rootfs.mount: Deactivated successfully. Aug 13 00:29:33.143821 containerd[1741]: time="2025-08-13T00:29:33.143797532Z" level=info msg="StopContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" returns successfully" Aug 13 00:29:33.144291 containerd[1741]: time="2025-08-13T00:29:33.144246083Z" level=info msg="StopPodSandbox for \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\"" Aug 13 00:29:33.144388 containerd[1741]: time="2025-08-13T00:29:33.144375150Z" level=info msg="Container to stop \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.145653 containerd[1741]: time="2025-08-13T00:29:33.145633150Z" level=info msg="StopContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" returns successfully" Aug 13 00:29:33.146100 containerd[1741]: time="2025-08-13T00:29:33.146075690Z" level=info msg="StopPodSandbox for \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\"" Aug 13 00:29:33.146171 containerd[1741]: time="2025-08-13T00:29:33.146130266Z" level=info msg="Container to stop \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.146171 containerd[1741]: time="2025-08-13T00:29:33.146141849Z" level=info msg="Container to stop \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.146171 containerd[1741]: time="2025-08-13T00:29:33.146151964Z" level=info msg="Container to stop \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.146171 containerd[1741]: time="2025-08-13T00:29:33.146160718Z" level=info msg="Container to stop \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.146171 containerd[1741]: time="2025-08-13T00:29:33.146169114Z" level=info msg="Container to stop \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:29:33.151776 systemd[1]: cri-containerd-4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692.scope: Deactivated successfully. Aug 13 00:29:33.152804 systemd[1]: cri-containerd-d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451.scope: Deactivated successfully. 
Aug 13 00:29:33.154616 containerd[1741]: time="2025-08-13T00:29:33.154585414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" id:\"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" pid:3282 exit_status:137 exited_at:{seconds:1755044973 nanos:154201056}" Aug 13 00:29:33.178160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451-rootfs.mount: Deactivated successfully. Aug 13 00:29:33.181825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692-rootfs.mount: Deactivated successfully. Aug 13 00:29:33.201287 containerd[1741]: time="2025-08-13T00:29:33.201247198Z" level=info msg="shim disconnected" id=4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692 namespace=k8s.io Aug 13 00:29:33.201287 containerd[1741]: time="2025-08-13T00:29:33.201287289Z" level=warning msg="cleaning up after shim disconnected" id=4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692 namespace=k8s.io Aug 13 00:29:33.201432 containerd[1741]: time="2025-08-13T00:29:33.201295793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:29:33.203152 containerd[1741]: time="2025-08-13T00:29:33.202912847Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe" Aug 13 00:29:33.203673 containerd[1741]: time="2025-08-13T00:29:33.203545712Z" level=info msg="shim disconnected" id=d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451 namespace=k8s.io Aug 13 00:29:33.203673 containerd[1741]: time="2025-08-13T00:29:33.203566830Z" level=warning msg="cleaning up after shim disconnected" id=d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451 namespace=k8s.io Aug 13 00:29:33.203673 containerd[1741]: time="2025-08-13T00:29:33.203573624Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:29:33.212956 containerd[1741]: time="2025-08-13T00:29:33.212935244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" id:\"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" pid:3338 exit_status:137 exited_at:{seconds:1755044973 nanos:157280975}" Aug 13 00:29:33.213439 containerd[1741]: time="2025-08-13T00:29:33.213416730Z" level=info msg="received exit event sandbox_id:\"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" exit_status:137 exited_at:{seconds:1755044973 nanos:154201056}" Aug 13 00:29:33.215448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451-shm.mount: Deactivated successfully. 
Aug 13 00:29:33.216318 containerd[1741]: time="2025-08-13T00:29:33.216253660Z" level=info msg="TearDown network for sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" successfully" Aug 13 00:29:33.216761 containerd[1741]: time="2025-08-13T00:29:33.216294976Z" level=info msg="StopPodSandbox for \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" returns successfully" Aug 13 00:29:33.216761 containerd[1741]: time="2025-08-13T00:29:33.216334848Z" level=info msg="TearDown network for sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" successfully" Aug 13 00:29:33.216761 containerd[1741]: time="2025-08-13T00:29:33.216449234Z" level=info msg="StopPodSandbox for \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" returns successfully" Aug 13 00:29:33.216761 containerd[1741]: time="2025-08-13T00:29:33.216368579Z" level=info msg="received exit event sandbox_id:\"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" exit_status:137 exited_at:{seconds:1755044973 nanos:157280975}" Aug 13 00:29:33.281102 kubelet[3138]: I0813 00:29:33.281075 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-kernel\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281102 kubelet[3138]: I0813 00:29:33.281105 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-run\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281416 kubelet[3138]: I0813 00:29:33.281130 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10ce67f0-3906-4fa3-9791-529f46ef09da-clustermesh-secrets\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281416 kubelet[3138]: I0813 00:29:33.281147 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-etc-cni-netd\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281416 kubelet[3138]: I0813 00:29:33.281164 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-net\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281416 kubelet[3138]: I0813 00:29:33.281181 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-cgroup\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281416 kubelet[3138]: I0813 00:29:33.281199 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-bpf-maps\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 
00:29:33.281416 kubelet[3138]: I0813 00:29:33.281213 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-hostproc\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281232 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfhns\" (UniqueName: \"kubernetes.io/projected/9039f2c4-9a0e-4479-abae-4ffb398676aa-kube-api-access-nfhns\") pod \"9039f2c4-9a0e-4479-abae-4ffb398676aa\" (UID: \"9039f2c4-9a0e-4479-abae-4ffb398676aa\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281249 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-xtables-lock\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281277 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vltm2\" (UniqueName: \"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-kube-api-access-vltm2\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281293 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-lib-modules\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281309 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cni-path\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281582 kubelet[3138]: I0813 00:29:33.281326 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-hubble-tls\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281722 kubelet[3138]: I0813 00:29:33.281349 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9039f2c4-9a0e-4479-abae-4ffb398676aa-cilium-config-path\") pod \"9039f2c4-9a0e-4479-abae-4ffb398676aa\" (UID: \"9039f2c4-9a0e-4479-abae-4ffb398676aa\") " Aug 13 00:29:33.281722 kubelet[3138]: I0813 00:29:33.281368 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-config-path\") pod \"10ce67f0-3906-4fa3-9791-529f46ef09da\" (UID: \"10ce67f0-3906-4fa3-9791-529f46ef09da\") " Aug 13 00:29:33.281865 kubelet[3138]: I0813 00:29:33.281841 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-hostproc" (OuterVolumeSpecName: "hostproc") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.281899 kubelet[3138]: I0813 00:29:33.281882 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.281926 kubelet[3138]: I0813 00:29:33.281899 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.282428 kubelet[3138]: I0813 00:29:33.282374 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.282428 kubelet[3138]: I0813 00:29:33.282404 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.284047 kubelet[3138]: I0813 00:29:33.283688 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.284047 kubelet[3138]: I0813 00:29:33.283730 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.284047 kubelet[3138]: I0813 00:29:33.283734 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.285340 kubelet[3138]: I0813 00:29:33.285311 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.286077 kubelet[3138]: I0813 00:29:33.285997 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cni-path" (OuterVolumeSpecName: "cni-path") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:29:33.286145 kubelet[3138]: I0813 00:29:33.286080 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ce67f0-3906-4fa3-9791-529f46ef09da-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:29:33.287395 kubelet[3138]: I0813 00:29:33.287359 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9039f2c4-9a0e-4479-abae-4ffb398676aa-kube-api-access-nfhns" (OuterVolumeSpecName: "kube-api-access-nfhns") pod "9039f2c4-9a0e-4479-abae-4ffb398676aa" (UID: "9039f2c4-9a0e-4479-abae-4ffb398676aa"). InnerVolumeSpecName "kube-api-access-nfhns". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:29:33.288007 kubelet[3138]: I0813 00:29:33.287988 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:29:33.288647 kubelet[3138]: I0813 00:29:33.288557 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:29:33.288647 kubelet[3138]: I0813 00:29:33.288593 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-kube-api-access-vltm2" (OuterVolumeSpecName: "kube-api-access-vltm2") pod "10ce67f0-3906-4fa3-9791-529f46ef09da" (UID: "10ce67f0-3906-4fa3-9791-529f46ef09da"). InnerVolumeSpecName "kube-api-access-vltm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:29:33.289471 kubelet[3138]: I0813 00:29:33.289449 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9039f2c4-9a0e-4479-abae-4ffb398676aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9039f2c4-9a0e-4479-abae-4ffb398676aa" (UID: "9039f2c4-9a0e-4479-abae-4ffb398676aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:29:33.360464 systemd[1]: Removed slice kubepods-besteffort-pod9039f2c4_9a0e_4479_abae_4ffb398676aa.slice - libcontainer container kubepods-besteffort-pod9039f2c4_9a0e_4479_abae_4ffb398676aa.slice. 
Aug 13 00:29:33.361786 systemd[1]: Removed slice kubepods-burstable-pod10ce67f0_3906_4fa3_9791_529f46ef09da.slice - libcontainer container kubepods-burstable-pod10ce67f0_3906_4fa3_9791_529f46ef09da.slice. Aug 13 00:29:33.361951 systemd[1]: kubepods-burstable-pod10ce67f0_3906_4fa3_9791_529f46ef09da.slice: Consumed 4.968s CPU time, 125.8M memory peak, 144K read from disk, 13.3M written to disk. Aug 13 00:29:33.381895 kubelet[3138]: I0813 00:29:33.381872 3138 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cni-path\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381895 kubelet[3138]: I0813 00:29:33.381890 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-config-path\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381901 3138 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-hubble-tls\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381909 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9039f2c4-9a0e-4479-abae-4ffb398676aa-cilium-config-path\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381916 3138 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10ce67f0-3906-4fa3-9791-529f46ef09da-clustermesh-secrets\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381925 3138 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-kernel\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381933 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-run\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381941 3138 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-host-proc-sys-net\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381949 3138 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-etc-cni-netd\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.381987 kubelet[3138]: I0813 00:29:33.381957 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-cilium-cgroup\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.382137 kubelet[3138]: I0813 00:29:33.381965 3138 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-bpf-maps\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 
00:29:33.382137 kubelet[3138]: I0813 00:29:33.381973 3138 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-hostproc\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.382137 kubelet[3138]: I0813 00:29:33.381981 3138 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfhns\" (UniqueName: \"kubernetes.io/projected/9039f2c4-9a0e-4479-abae-4ffb398676aa-kube-api-access-nfhns\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.382137 kubelet[3138]: I0813 00:29:33.381991 3138 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-xtables-lock\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.382137 kubelet[3138]: I0813 00:29:33.381999 3138 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vltm2\" (UniqueName: \"kubernetes.io/projected/10ce67f0-3906-4fa3-9791-529f46ef09da-kube-api-access-vltm2\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.382137 kubelet[3138]: I0813 00:29:33.382009 3138 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10ce67f0-3906-4fa3-9791-529f46ef09da-lib-modules\") on node \"ci-4372.1.0-a-4baf804050\" DevicePath \"\"" Aug 13 00:29:33.665629 kubelet[3138]: I0813 00:29:33.665505 3138 scope.go:117] "RemoveContainer" containerID="b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28" Aug 13 00:29:33.668191 containerd[1741]: time="2025-08-13T00:29:33.668148216Z" level=info msg="RemoveContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\"" Aug 13 00:29:33.676648 containerd[1741]: time="2025-08-13T00:29:33.676607568Z" level=info msg="RemoveContainer for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" returns successfully" Aug 13 00:29:33.676862 kubelet[3138]: I0813 00:29:33.676840 3138 scope.go:117] "RemoveContainer" containerID="b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478" Aug 13 00:29:33.678077 containerd[1741]: time="2025-08-13T00:29:33.678048990Z" level=info msg="RemoveContainer for \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\"" Aug 13 00:29:33.687028 containerd[1741]: time="2025-08-13T00:29:33.685609897Z" level=info msg="RemoveContainer for \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" returns successfully" Aug 13 00:29:33.688214 kubelet[3138]: I0813 00:29:33.687725 3138 scope.go:117] "RemoveContainer" containerID="1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f" Aug 13 00:29:33.693968 containerd[1741]: time="2025-08-13T00:29:33.693948511Z" level=info msg="RemoveContainer for \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\"" Aug 13 00:29:33.701514 containerd[1741]: time="2025-08-13T00:29:33.701435928Z" level=info msg="RemoveContainer for \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" returns successfully" Aug 13 00:29:33.701722 kubelet[3138]: I0813 00:29:33.701584 3138 scope.go:117] "RemoveContainer" containerID="c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9" Aug 13 00:29:33.703086 containerd[1741]: time="2025-08-13T00:29:33.703067188Z" level=info msg="RemoveContainer for \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\"" Aug 13 00:29:33.708521 containerd[1741]: 
time="2025-08-13T00:29:33.708482770Z" level=info msg="RemoveContainer for \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" returns successfully" Aug 13 00:29:33.708699 kubelet[3138]: I0813 00:29:33.708649 3138 scope.go:117] "RemoveContainer" containerID="caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f" Aug 13 00:29:33.709873 containerd[1741]: time="2025-08-13T00:29:33.709852645Z" level=info msg="RemoveContainer for \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\"" Aug 13 00:29:33.715796 containerd[1741]: time="2025-08-13T00:29:33.715760147Z" level=info msg="RemoveContainer for \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" returns successfully" Aug 13 00:29:33.715954 kubelet[3138]: I0813 00:29:33.715915 3138 scope.go:117] "RemoveContainer" containerID="b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28" Aug 13 00:29:33.716146 containerd[1741]: time="2025-08-13T00:29:33.716115011Z" level=error msg="ContainerStatus for \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\": not found" Aug 13 00:29:33.716264 kubelet[3138]: E0813 00:29:33.716237 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\": not found" containerID="b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28" Aug 13 00:29:33.716343 kubelet[3138]: I0813 00:29:33.716268 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28"} err="failed to get container status \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\": rpc error: code = NotFound desc = an error occurred when try to find container \"b88bad75eacdbd929fe136c978bcffab01ad571f5f4e697fbb2184fbbb286e28\": not found" Aug 13 00:29:33.716378 kubelet[3138]: I0813 00:29:33.716344 3138 scope.go:117] "RemoveContainer" containerID="b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478" Aug 13 00:29:33.716528 containerd[1741]: time="2025-08-13T00:29:33.716492103Z" level=error msg="ContainerStatus for \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\": not found" Aug 13 00:29:33.716638 kubelet[3138]: E0813 00:29:33.716605 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\": not found" containerID="b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478" Aug 13 00:29:33.716685 kubelet[3138]: I0813 00:29:33.716643 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478"} err="failed to get container status \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2f61fc94288de1032a2546a816bce575faa7c9709c56428ca7270f4dcf4f478\": not found" Aug 13 
00:29:33.716685 kubelet[3138]: I0813 00:29:33.716660 3138 scope.go:117] "RemoveContainer" containerID="1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f" Aug 13 00:29:33.716822 containerd[1741]: time="2025-08-13T00:29:33.716793829Z" level=error msg="ContainerStatus for \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\": not found" Aug 13 00:29:33.716901 kubelet[3138]: E0813 00:29:33.716885 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\": not found" containerID="1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f" Aug 13 00:29:33.716935 kubelet[3138]: I0813 00:29:33.716908 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f"} err="failed to get container status \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1989210d1f910ff0abb43c61af11d7963710d3da1ee13192dbccca095f0ca87f\": not found" Aug 13 00:29:33.716935 kubelet[3138]: I0813 00:29:33.716923 3138 scope.go:117] "RemoveContainer" containerID="c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9" Aug 13 00:29:33.717070 containerd[1741]: time="2025-08-13T00:29:33.717047960Z" level=error msg="ContainerStatus for \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\": not found" Aug 13 00:29:33.717148 kubelet[3138]: E0813 00:29:33.717133 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\": not found" containerID="c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9" Aug 13 00:29:33.717181 kubelet[3138]: I0813 00:29:33.717152 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9"} err="failed to get container status \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7cdbe234af172cf43cfe8fafd7995c4a28be5d12a84518a9b32272a0739f0f9\": not found" Aug 13 00:29:33.717181 kubelet[3138]: I0813 00:29:33.717166 3138 scope.go:117] "RemoveContainer" containerID="caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f" Aug 13 00:29:33.717312 containerd[1741]: time="2025-08-13T00:29:33.717283650Z" level=error msg="ContainerStatus for \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\": not found" Aug 13 00:29:33.717447 kubelet[3138]: E0813 00:29:33.717391 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\": not found" containerID="caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f" Aug 13 00:29:33.717447 kubelet[3138]: I0813 00:29:33.717408 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f"} err="failed to get container status \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\": rpc error: code = NotFound desc = an error occurred when try to find container \"caf8716fb1be7e8894b55a5b1d495ff4dfca28731d37ed02b46e0a638623a79f\": not found" Aug 13 00:29:33.717447 kubelet[3138]: I0813 00:29:33.717421 3138 scope.go:117] "RemoveContainer" containerID="2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9" Aug 13 00:29:33.718631 containerd[1741]: time="2025-08-13T00:29:33.718612941Z" level=info msg="RemoveContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\"" Aug 13 00:29:33.724401 containerd[1741]: time="2025-08-13T00:29:33.724378917Z" level=info msg="RemoveContainer for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" returns successfully" Aug 13 00:29:33.724540 kubelet[3138]: I0813 00:29:33.724528 3138 scope.go:117] "RemoveContainer" containerID="2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9" Aug 13 00:29:33.724733 containerd[1741]: time="2025-08-13T00:29:33.724703704Z" level=error msg="ContainerStatus for \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\": not found" Aug 13 00:29:33.724813 kubelet[3138]: E0813 00:29:33.724798 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\": not found" containerID="2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9" Aug 13 00:29:33.724847 kubelet[3138]: I0813 00:29:33.724827 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9"} err="failed to get container status \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2297f146a71647e423ceeab7dc1933742af11edf8a02aabe5299850dacf15df9\": not found" Aug 13 00:29:34.095612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692-shm.mount: Deactivated successfully. Aug 13 00:29:34.095713 systemd[1]: var-lib-kubelet-pods-9039f2c4\x2d9a0e\x2d4479\x2dabae\x2d4ffb398676aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfhns.mount: Deactivated successfully. Aug 13 00:29:34.095770 systemd[1]: var-lib-kubelet-pods-10ce67f0\x2d3906\x2d4fa3\x2d9791\x2d529f46ef09da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvltm2.mount: Deactivated successfully. Aug 13 00:29:34.095832 systemd[1]: var-lib-kubelet-pods-10ce67f0\x2d3906\x2d4fa3\x2d9791\x2d529f46ef09da-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 00:29:34.095891 systemd[1]: var-lib-kubelet-pods-10ce67f0\x2d3906\x2d4fa3\x2d9791\x2d529f46ef09da-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:29:35.110061 sshd[4697]: Connection closed by 10.200.16.10 port 42376 Aug 13 00:29:35.110601 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:35.113285 systemd[1]: sshd@23-10.200.8.5:22-10.200.16.10:42376.service: Deactivated successfully. Aug 13 00:29:35.115148 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:29:35.117082 systemd-logind[1696]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:29:35.117986 systemd-logind[1696]: Removed session 26. Aug 13 00:29:35.227936 systemd[1]: Started sshd@24-10.200.8.5:22-10.200.16.10:42384.service - OpenSSH per-connection server daemon (10.200.16.10:42384). Aug 13 00:29:35.357276 kubelet[3138]: I0813 00:29:35.357244 3138 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" path="/var/lib/kubelet/pods/10ce67f0-3906-4fa3-9791-529f46ef09da/volumes" Aug 13 00:29:35.357745 kubelet[3138]: I0813 00:29:35.357715 3138 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9039f2c4-9a0e-4479-abae-4ffb398676aa" path="/var/lib/kubelet/pods/9039f2c4-9a0e-4479-abae-4ffb398676aa/volumes" Aug 13 00:29:35.858784 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 42384 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:35.859875 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:35.864117 systemd-logind[1696]: New session 27 of user core. Aug 13 00:29:35.874192 systemd[1]: Started session-27.scope - Session 27 of User core. 
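[Editor's note] Once systemd has deactivated the projected and secret volume mounts for the two deleted pods, the kubelet logs "Cleaned up orphaned pod volumes dir" for /var/lib/kubelet/pods/<pod-uid>/volumes. Below is a simplified sketch of scanning for pod directories that still carry volume subdirectories; the real kubelet check also consults the API server and the volume plugins before removing anything, so this is only an analogue of what the log shows.

    // Sketch: list pod UID directories under /var/lib/kubelet/pods that still contain
    // a non-empty volumes/ subdirectory (a simplified analogue of the orphan scan above).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/var/lib/kubelet/pods"
        pods, err := os.ReadDir(root)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, p := range pods {
            if !p.IsDir() {
                continue
            }
            volDir := filepath.Join(root, p.Name(), "volumes")
            entries, err := os.ReadDir(volDir)
            if err != nil || len(entries) == 0 {
                continue // no volume plugin dirs left for this pod UID
            }
            fmt.Printf("pod %s still has %d volume plugin dirs under %s\n",
                p.Name(), len(entries), volDir)
        }
    }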
Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595077 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9039f2c4-9a0e-4479-abae-4ffb398676aa" containerName="cilium-operator" Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595108 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="mount-cgroup" Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595116 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="apply-sysctl-overwrites" Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595123 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="mount-bpf-fs" Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595129 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="clean-cilium-state" Aug 13 00:29:36.595247 kubelet[3138]: E0813 00:29:36.595136 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="cilium-agent" Aug 13 00:29:36.596874 kubelet[3138]: I0813 00:29:36.596069 3138 memory_manager.go:354] "RemoveStaleState removing state" podUID="9039f2c4-9a0e-4479-abae-4ffb398676aa" containerName="cilium-operator" Aug 13 00:29:36.596874 kubelet[3138]: I0813 00:29:36.596185 3138 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ce67f0-3906-4fa3-9791-529f46ef09da" containerName="cilium-agent" Aug 13 00:29:36.606796 systemd[1]: Created slice kubepods-burstable-pod67c8a11c_6e72_405e_a0ec_7e2a5400975c.slice - libcontainer container kubepods-burstable-pod67c8a11c_6e72_405e_a0ec_7e2a5400975c.slice. 
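[Editor's note] The slice created here, kubepods-burstable-pod67c8a11c_6e72_405e_a0ec_7e2a5400975c.slice, encodes the pod's QoS class ("burstable") and its UID with dashes mapped to underscores. The sketch below only reproduces that naming pattern as observed in this log; it is not the kubelet's actual cgroup-name helper, which also builds the parent kubepods/kubepods-burstable slices.

    // Sketch of the systemd slice-name pattern visible in the log above:
    // "kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice".
    package main

    import (
        "fmt"
        "strings"
    )

    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "67c8a11c-6e72-405e-a0ec-7e2a5400975c"))
        // kubepods-burstable-pod67c8a11c_6e72_405e_a0ec_7e2a5400975c.slice
    }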
Aug 13 00:29:36.699211 kubelet[3138]: I0813 00:29:36.699167 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-cilium-cgroup\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699218 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-cni-path\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699237 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-xtables-lock\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699266 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-host-proc-sys-net\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699284 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-bpf-maps\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699301 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-host-proc-sys-kernel\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699311 kubelet[3138]: I0813 00:29:36.699317 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-etc-cni-netd\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699335 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67c8a11c-6e72-405e-a0ec-7e2a5400975c-clustermesh-secrets\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699359 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-cilium-run\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699385 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/67c8a11c-6e72-405e-a0ec-7e2a5400975c-cilium-config-path\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699402 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnbv\" (UniqueName: \"kubernetes.io/projected/67c8a11c-6e72-405e-a0ec-7e2a5400975c-kube-api-access-nbnbv\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699425 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-hostproc\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699574 kubelet[3138]: I0813 00:29:36.699445 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c8a11c-6e72-405e-a0ec-7e2a5400975c-lib-modules\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699672 kubelet[3138]: I0813 00:29:36.699462 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67c8a11c-6e72-405e-a0ec-7e2a5400975c-cilium-ipsec-secrets\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.699672 kubelet[3138]: I0813 00:29:36.699489 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67c8a11c-6e72-405e-a0ec-7e2a5400975c-hubble-tls\") pod \"cilium-jdvfr\" (UID: \"67c8a11c-6e72-405e-a0ec-7e2a5400975c\") " pod="kube-system/cilium-jdvfr" Aug 13 00:29:36.707491 sshd[4852]: Connection closed by 10.200.16.10 port 42384 Aug 13 00:29:36.707877 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:36.710431 systemd[1]: sshd@24-10.200.8.5:22-10.200.16.10:42384.service: Deactivated successfully. Aug 13 00:29:36.712573 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:29:36.715247 systemd-logind[1696]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:29:36.716727 systemd-logind[1696]: Removed session 27. Aug 13 00:29:36.837686 systemd[1]: Started sshd@25-10.200.8.5:22-10.200.16.10:42400.service - OpenSSH per-connection server daemon (10.200.16.10:42400). Aug 13 00:29:36.911639 containerd[1741]: time="2025-08-13T00:29:36.911605568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdvfr,Uid:67c8a11c-6e72-405e-a0ec-7e2a5400975c,Namespace:kube-system,Attempt:0,}" Aug 13 00:29:36.939415 containerd[1741]: time="2025-08-13T00:29:36.939327760Z" level=info msg="connecting to shim 0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:29:36.959194 systemd[1]: Started cri-containerd-0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77.scope - libcontainer container 0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77. 
Aug 13 00:29:36.977507 containerd[1741]: time="2025-08-13T00:29:36.977476952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdvfr,Uid:67c8a11c-6e72-405e-a0ec-7e2a5400975c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\"" Aug 13 00:29:36.979641 containerd[1741]: time="2025-08-13T00:29:36.979624732Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:29:36.989654 containerd[1741]: time="2025-08-13T00:29:36.989627089Z" level=info msg="Container e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:29:37.000183 containerd[1741]: time="2025-08-13T00:29:37.000158493Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\"" Aug 13 00:29:37.001039 containerd[1741]: time="2025-08-13T00:29:37.000567982Z" level=info msg="StartContainer for \"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\"" Aug 13 00:29:37.001534 containerd[1741]: time="2025-08-13T00:29:37.001506917Z" level=info msg="connecting to shim e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" protocol=ttrpc version=3 Aug 13 00:29:37.018165 systemd[1]: Started cri-containerd-e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df.scope - libcontainer container e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df. Aug 13 00:29:37.041939 containerd[1741]: time="2025-08-13T00:29:37.041918482Z" level=info msg="StartContainer for \"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\" returns successfully" Aug 13 00:29:37.044677 systemd[1]: cri-containerd-e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df.scope: Deactivated successfully. Aug 13 00:29:37.046409 containerd[1741]: time="2025-08-13T00:29:37.046386992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\" id:\"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\" pid:4927 exited_at:{seconds:1755044977 nanos:46050497}" Aug 13 00:29:37.046587 containerd[1741]: time="2025-08-13T00:29:37.046516323Z" level=info msg="received exit event container_id:\"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\" id:\"e4036d56b79aa04f7ae00a6ce5d1d51c9e17032c92e357e800ce3c6c1660a2df\" pid:4927 exited_at:{seconds:1755044977 nanos:46050497}" Aug 13 00:29:37.437455 kubelet[3138]: E0813 00:29:37.437411 3138 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:29:37.470094 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 42400 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:37.470996 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:37.474763 systemd-logind[1696]: New session 28 of user core. Aug 13 00:29:37.482160 systemd[1]: Started session-28.scope - Session 28 of User core. 
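[Editor's note] Inside that sandbox the kubelet runs Cilium's init containers one at a time; the entries above show CreateContainer and StartContainer for mount-cgroup, followed almost immediately by the cri-containerd scope deactivating and a TaskExit event. The two CRI calls are sketched below, again assuming the client from the earlier sketch; the image reference and command are placeholders, since the log records only the container name and ID.

    // Sketch: CreateContainer + StartContainer in an existing sandbox, as for the
    // mount-cgroup init container above. Image and command are placeholders.
    func runInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder
                Command:  []string{"/bin/sh", "-c", "true"},                                  // placeholder
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            return "", err
        }
        return created.ContainerId, nil
    }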
Aug 13 00:29:37.681019 containerd[1741]: time="2025-08-13T00:29:37.680978781Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:29:37.692978 containerd[1741]: time="2025-08-13T00:29:37.692911269Z" level=info msg="Container 16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:29:37.705635 containerd[1741]: time="2025-08-13T00:29:37.705588290Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\"" Aug 13 00:29:37.706754 containerd[1741]: time="2025-08-13T00:29:37.706630402Z" level=info msg="StartContainer for \"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\"" Aug 13 00:29:37.707883 containerd[1741]: time="2025-08-13T00:29:37.707794687Z" level=info msg="connecting to shim 16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" protocol=ttrpc version=3 Aug 13 00:29:37.725178 systemd[1]: Started cri-containerd-16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a.scope - libcontainer container 16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a. Aug 13 00:29:37.750287 containerd[1741]: time="2025-08-13T00:29:37.750266631Z" level=info msg="StartContainer for \"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\" returns successfully" Aug 13 00:29:37.752080 systemd[1]: cri-containerd-16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a.scope: Deactivated successfully. Aug 13 00:29:37.754495 containerd[1741]: time="2025-08-13T00:29:37.754468861Z" level=info msg="received exit event container_id:\"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\" id:\"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\" pid:4975 exited_at:{seconds:1755044977 nanos:754110791}" Aug 13 00:29:37.754746 containerd[1741]: time="2025-08-13T00:29:37.754701333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\" id:\"16449129c38e5f106853a9532eb47bac4f926b0dea8bc68724ce577f06917a4a\" pid:4975 exited_at:{seconds:1755044977 nanos:754110791}" Aug 13 00:29:37.909786 sshd[4962]: Connection closed by 10.200.16.10 port 42400 Aug 13 00:29:37.910254 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:37.912428 systemd[1]: sshd@25-10.200.8.5:22-10.200.16.10:42400.service: Deactivated successfully. Aug 13 00:29:37.914099 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:29:37.915698 systemd-logind[1696]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:29:37.916504 systemd-logind[1696]: Removed session 28. Aug 13 00:29:38.030303 systemd[1]: Started sshd@26-10.200.8.5:22-10.200.16.10:42406.service - OpenSSH per-connection server daemon (10.200.16.10:42406). 
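[Editor's note] The TaskExit events above carry exited_at as a plain Unix seconds/nanos pair. Decoding the pair from the apply-sysctl-overwrites exit (seconds:1755044977 nanos:754110791) reproduces the wall-clock instant the surrounding log timestamps show, which is a quick way to cross-check these events when reading the log offline.

    // Sketch: decode an exited_at {seconds, nanos} pair from a TaskExit event.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        exitedAt := time.Unix(1755044977, 754110791).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-08-13T00:29:37.754110791Z
    }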
Aug 13 00:29:38.659265 sshd[5015]: Accepted publickey for core from 10.200.16.10 port 42406 ssh2: RSA SHA256:j7p4XQXWFlakDCpIugyDQaaIj3GWiUw3GsrDiBCoheU Aug 13 00:29:38.660326 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:38.664719 systemd-logind[1696]: New session 29 of user core. Aug 13 00:29:38.669170 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:29:38.685596 containerd[1741]: time="2025-08-13T00:29:38.685236607Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:29:38.706769 containerd[1741]: time="2025-08-13T00:29:38.706739342Z" level=info msg="Container 5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:29:38.720099 containerd[1741]: time="2025-08-13T00:29:38.720071873Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\"" Aug 13 00:29:38.720405 containerd[1741]: time="2025-08-13T00:29:38.720387499Z" level=info msg="StartContainer for \"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\"" Aug 13 00:29:38.722114 containerd[1741]: time="2025-08-13T00:29:38.721998827Z" level=info msg="connecting to shim 5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" protocol=ttrpc version=3 Aug 13 00:29:38.744575 systemd[1]: Started cri-containerd-5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3.scope - libcontainer container 5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3. Aug 13 00:29:38.769644 systemd[1]: cri-containerd-5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3.scope: Deactivated successfully. Aug 13 00:29:38.773592 containerd[1741]: time="2025-08-13T00:29:38.771975270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\" id:\"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\" pid:5031 exited_at:{seconds:1755044978 nanos:771784340}" Aug 13 00:29:38.775268 containerd[1741]: time="2025-08-13T00:29:38.775229534Z" level=info msg="received exit event container_id:\"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\" id:\"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\" pid:5031 exited_at:{seconds:1755044978 nanos:771784340}" Aug 13 00:29:38.781917 containerd[1741]: time="2025-08-13T00:29:38.781864889Z" level=info msg="StartContainer for \"5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3\" returns successfully" Aug 13 00:29:38.805678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5970d4e4cfe898f5fd71df4b048b5eed249f184f4765656910f093b64ff55fb3-rootfs.mount: Deactivated successfully. 
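[Editor's note] The mount unit names in this log (var-lib-kubelet-pods-...\x2d...\x7eprojected-... and run-containerd-io.containerd.runtime.v2.task-k8s.io-...-rootfs.mount) are systemd's escaped form of the mount paths: "/" becomes "-", while characters such as "-" and "~" inside a path component are hex-escaped. Below is a simplified sketch of that escaping, sufficient to reproduce the names seen here; the real systemd-escape(1) also handles leading dots, the root path, and unescaping.

    // Simplified sketch of systemd's path-to-unit-name escaping as observed above.
    package main

    import "fmt"

    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:] // systemd drops the leading "/" of absolute paths
        }
        out := make([]byte, 0, len(p))
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out = append(out, '-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
                c == '.', c == '_', c == ':':
                out = append(out, c)
            default:
                out = append(out, []byte(fmt.Sprintf("\\x%02x", c))...)
            }
        }
        return string(out)
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/10ce67f0-3906-4fa3-9791-529f46ef09da/volumes/kubernetes.io~projected/hubble-tls") + ".mount")
        // var-lib-kubelet-pods-10ce67f0\x2d...-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
    }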
Aug 13 00:29:39.690045 containerd[1741]: time="2025-08-13T00:29:39.688352503Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:29:39.703329 containerd[1741]: time="2025-08-13T00:29:39.702707990Z" level=info msg="Container 0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:29:39.715601 containerd[1741]: time="2025-08-13T00:29:39.715574150Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\"" Aug 13 00:29:39.716853 containerd[1741]: time="2025-08-13T00:29:39.716181449Z" level=info msg="StartContainer for \"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\"" Aug 13 00:29:39.718045 containerd[1741]: time="2025-08-13T00:29:39.717928099Z" level=info msg="connecting to shim 0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" protocol=ttrpc version=3 Aug 13 00:29:39.738140 systemd[1]: Started cri-containerd-0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415.scope - libcontainer container 0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415. Aug 13 00:29:39.759852 systemd[1]: cri-containerd-0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415.scope: Deactivated successfully. Aug 13 00:29:39.761472 containerd[1741]: time="2025-08-13T00:29:39.761455556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\" id:\"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\" pid:5076 exited_at:{seconds:1755044979 nanos:761233955}" Aug 13 00:29:39.762408 containerd[1741]: time="2025-08-13T00:29:39.762299667Z" level=info msg="received exit event container_id:\"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\" id:\"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\" pid:5076 exited_at:{seconds:1755044979 nanos:761233955}" Aug 13 00:29:39.769541 containerd[1741]: time="2025-08-13T00:29:39.769453922Z" level=info msg="StartContainer for \"0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415\" returns successfully" Aug 13 00:29:39.781407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dfebfe1652eb78de35010495d0ae94a63e950b48991b015bb759195c952f415-rootfs.mount: Deactivated successfully. 
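[Editor's note] Each init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) must exit before the next is created, which is why every CreateContainer/StartContainer pair above is immediately followed by a scope deactivation and a TaskExit event. A sketch of detecting such an exit through the CRI ListContainers call is below; the polling loop and its interval are assumptions made for illustration (the kubelet learns about exits through its pod lifecycle event machinery, not a loop like this), and it reuses the client and imports from the earlier sketches plus the "time" package.

    // Sketch: wait until a named container in a sandbox reports CONTAINER_EXITED.
    func waitForExit(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID, containerName string) error {

        for {
            resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
                Filter: &runtimeapi.ContainerFilter{PodSandboxId: sandboxID},
            })
            if err != nil {
                return err
            }
            for _, c := range resp.Containers {
                if c.Metadata.GetName() == containerName &&
                    c.State == runtimeapi.ContainerState_CONTAINER_EXITED {
                    return nil // e.g. clean-cilium-state has finished
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }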
Aug 13 00:29:40.694190 containerd[1741]: time="2025-08-13T00:29:40.694094866Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:29:40.712037 containerd[1741]: time="2025-08-13T00:29:40.711977893Z" level=info msg="Container ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:29:40.723825 containerd[1741]: time="2025-08-13T00:29:40.723798730Z" level=info msg="CreateContainer within sandbox \"0a46ee64ae35a3e4ed9a536bb108c2beb1b9b0a6e3b7bbf7692a66fbcffa6c77\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\"" Aug 13 00:29:40.724617 containerd[1741]: time="2025-08-13T00:29:40.724184857Z" level=info msg="StartContainer for \"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\"" Aug 13 00:29:40.725283 containerd[1741]: time="2025-08-13T00:29:40.725237336Z" level=info msg="connecting to shim ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b" address="unix:///run/containerd/s/32d318980a4d8d3c20ea11783d1229610d9a64eee2763bd78b33b17ca9188069" protocol=ttrpc version=3 Aug 13 00:29:40.744321 systemd[1]: Started cri-containerd-ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b.scope - libcontainer container ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b. Aug 13 00:29:40.775026 containerd[1741]: time="2025-08-13T00:29:40.774995556Z" level=info msg="StartContainer for \"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" returns successfully" Aug 13 00:29:40.824468 containerd[1741]: time="2025-08-13T00:29:40.824444202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" id:\"bb89ba5729faee464a9605d581d02f6d7f8b72008360a11274cc99b14641ff35\" pid:5146 exited_at:{seconds:1755044980 nanos:824255045}" Aug 13 00:29:40.905638 kubelet[3138]: I0813 00:29:40.905592 3138 setters.go:600] "Node became not ready" node="ci-4372.1.0-a-4baf804050" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:29:40Z","lastTransitionTime":"2025-08-13T00:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:29:41.022031 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Aug 13 00:29:41.711216 kubelet[3138]: I0813 00:29:41.711160 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdvfr" podStartSLOduration=5.711141456 podStartE2EDuration="5.711141456s" podCreationTimestamp="2025-08-13 00:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:29:41.710511647 +0000 UTC m=+174.444815461" watchObservedRunningTime="2025-08-13 00:29:41.711141456 +0000 UTC m=+174.445445309" Aug 13 00:29:43.152423 containerd[1741]: time="2025-08-13T00:29:43.152362805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" id:\"ecab99666df67118a3a31659dccdd7a479388c5761e95ee54fb7fb7129b6d743\" pid:5550 exit_status:1 exited_at:{seconds:1755044983 nanos:151786541}" Aug 13 
00:29:43.413660 systemd-networkd[1583]: lxc_health: Link UP Aug 13 00:29:43.420818 systemd-networkd[1583]: lxc_health: Gained carrier Aug 13 00:29:44.936228 systemd-networkd[1583]: lxc_health: Gained IPv6LL Aug 13 00:29:45.286469 containerd[1741]: time="2025-08-13T00:29:45.286351960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" id:\"0a9733010709aa5af6a487aa7ad350dd14c45486180275555d5c324bf5da8d7f\" pid:5685 exited_at:{seconds:1755044985 nanos:285632911}" Aug 13 00:29:47.359197 containerd[1741]: time="2025-08-13T00:29:47.359061454Z" level=info msg="StopPodSandbox for \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\"" Aug 13 00:29:47.359522 containerd[1741]: time="2025-08-13T00:29:47.359418012Z" level=info msg="TearDown network for sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" successfully" Aug 13 00:29:47.359522 containerd[1741]: time="2025-08-13T00:29:47.359432662Z" level=info msg="StopPodSandbox for \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" returns successfully" Aug 13 00:29:47.360624 containerd[1741]: time="2025-08-13T00:29:47.360592411Z" level=info msg="RemovePodSandbox for \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\"" Aug 13 00:29:47.360802 containerd[1741]: time="2025-08-13T00:29:47.360712672Z" level=info msg="Forcibly stopping sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\"" Aug 13 00:29:47.360893 containerd[1741]: time="2025-08-13T00:29:47.360882842Z" level=info msg="TearDown network for sandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" successfully" Aug 13 00:29:47.362193 containerd[1741]: time="2025-08-13T00:29:47.362170301Z" level=info msg="Ensure that sandbox 4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692 in task-service has been cleanup successfully" Aug 13 00:29:47.371968 containerd[1741]: time="2025-08-13T00:29:47.371883598Z" level=info msg="RemovePodSandbox \"4fa75df5fb704119a27e7715bd659888a00828c9c09425fd0de4c38a029a7692\" returns successfully" Aug 13 00:29:47.372718 containerd[1741]: time="2025-08-13T00:29:47.372681834Z" level=info msg="StopPodSandbox for \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\"" Aug 13 00:29:47.372994 containerd[1741]: time="2025-08-13T00:29:47.372976299Z" level=info msg="TearDown network for sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" successfully" Aug 13 00:29:47.373140 containerd[1741]: time="2025-08-13T00:29:47.373034357Z" level=info msg="StopPodSandbox for \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" returns successfully" Aug 13 00:29:47.373420 containerd[1741]: time="2025-08-13T00:29:47.373402328Z" level=info msg="RemovePodSandbox for \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\"" Aug 13 00:29:47.373527 containerd[1741]: time="2025-08-13T00:29:47.373427611Z" level=info msg="Forcibly stopping sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\"" Aug 13 00:29:47.373576 containerd[1741]: time="2025-08-13T00:29:47.373525004Z" level=info msg="TearDown network for sandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" successfully" Aug 13 00:29:47.374593 containerd[1741]: time="2025-08-13T00:29:47.374568663Z" level=info msg="Ensure that sandbox d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451 in task-service has been cleanup 
successfully" Aug 13 00:29:47.381680 containerd[1741]: time="2025-08-13T00:29:47.381641222Z" level=info msg="RemovePodSandbox \"d7a16d99e4fc5c37b8610f3207d1cce8218bf78651e794530b63b2c9e5973451\" returns successfully" Aug 13 00:29:47.398562 containerd[1741]: time="2025-08-13T00:29:47.398530580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" id:\"c581fcdf099ef35f3ddbbc12b97a24d1445999f24e453204fe363defe073cc42\" pid:5720 exited_at:{seconds:1755044987 nanos:398185305}" Aug 13 00:29:49.476555 containerd[1741]: time="2025-08-13T00:29:49.476512574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff8a616491dca9e23422b86333e869fc58a2ddfe0a80c453b84058a49d36285b\" id:\"e5ecf7a860a34049e43d1675cfcb7f1ff7a360b87bad6d8ac5db1e17ab849b1b\" pid:5744 exited_at:{seconds:1755044989 nanos:476262306}" Aug 13 00:29:49.597112 sshd[5018]: Connection closed by 10.200.16.10 port 42406 Aug 13 00:29:49.597628 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:49.600689 systemd[1]: sshd@26-10.200.8.5:22-10.200.16.10:42406.service: Deactivated successfully. Aug 13 00:29:49.602417 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:29:49.603128 systemd-logind[1696]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:29:49.604196 systemd-logind[1696]: Removed session 29.