Jun 21 04:44:12.963887 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025
Jun 21 04:44:12.963911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 04:44:12.963921 kernel: BIOS-provided physical RAM map:
Jun 21 04:44:12.963927 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 21 04:44:12.963933 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 21 04:44:12.963939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jun 21 04:44:12.963948 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Jun 21 04:44:12.963954 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
Jun 21 04:44:12.963960 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
Jun 21 04:44:12.963966 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 21 04:44:12.963972 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 21 04:44:12.963978 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 21 04:44:12.963984 kernel: printk: legacy bootconsole [earlyser0] enabled
Jun 21 04:44:12.963990 kernel: NX (Execute Disable) protection: active
Jun 21 04:44:12.964000 kernel: APIC: Static calls initialized
Jun 21 04:44:12.964006 kernel: efi: EFI v2.7 by Microsoft
Jun 21 04:44:12.964013 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebaca98 RNG=0x3ffd2018
Jun 21 04:44:12.964020 kernel: random: crng init done
Jun 21 04:44:12.964026 kernel: secureboot: Secure boot disabled
Jun 21 04:44:12.964032 kernel: SMBIOS 3.1.0 present.
Jun 21 04:44:12.964039 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024
Jun 21 04:44:12.964045 kernel: DMI: Memory slots populated: 2/2
Jun 21 04:44:12.964052 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 21 04:44:12.964058 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jun 21 04:44:12.964065 kernel: Hyper-V: Nested features: 0x3e0101
Jun 21 04:44:12.964071 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 21 04:44:12.964077 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 21 04:44:12.964083 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 21 04:44:12.964088 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 21 04:44:12.964096 kernel: tsc: Detected 2300.000 MHz processor
Jun 21 04:44:12.964106 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 21 04:44:12.964117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 21 04:44:12.964126 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jun 21 04:44:12.964133 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 21 04:44:12.964140 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 21 04:44:12.964147 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jun 21 04:44:12.964175 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jun 21 04:44:12.964182 kernel: Using GB pages for direct mapping
Jun 21 04:44:12.964188 kernel: ACPI: Early table checksum verification disabled
Jun 21 04:44:12.964195 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 21 04:44:12.964205 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964213 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964220 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 21 04:44:12.964227 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 21 04:44:12.964234 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964241 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964249 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964256 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 21 04:44:12.964263 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 21 04:44:12.964270 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 21 04:44:12.964277 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 21 04:44:12.964284 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Jun 21 04:44:12.964291 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 21 04:44:12.964298 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 21 04:44:12.964305 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 21 04:44:12.964313 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 21 04:44:12.964320 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jun 21 04:44:12.964326 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jun 21 04:44:12.964333 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 21 04:44:12.964340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 21 04:44:12.964347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jun 21 04:44:12.964354 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jun 21 04:44:12.964361 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jun 21 04:44:12.964368 kernel: Zone ranges:
Jun 21 04:44:12.964376 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 21 04:44:12.964383 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 21 04:44:12.964390 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 21 04:44:12.964396 kernel: Device empty
Jun 21 04:44:12.964403 kernel: Movable zone start for each node
Jun 21 04:44:12.964410 kernel: Early memory node ranges
Jun 21 04:44:12.964417 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 21 04:44:12.964423 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jun 21 04:44:12.964430 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
Jun 21 04:44:12.964438 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 21 04:44:12.964445 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 21 04:44:12.964452 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 21 04:44:12.964459 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 21 04:44:12.964466 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 21 04:44:12.964473 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Jun 21 04:44:12.964479 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
Jun 21 04:44:12.964486 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 21 04:44:12.964493 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 21 04:44:12.964502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 21 04:44:12.964509 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 21 04:44:12.964516 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 21 04:44:12.964522 kernel: TSC deadline timer available
Jun 21 04:44:12.964529 kernel: CPU topo: Max. logical packages: 1
Jun 21 04:44:12.964536 kernel: CPU topo: Max. logical dies: 1
Jun 21 04:44:12.964543 kernel: CPU topo: Max. dies per package: 1
Jun 21 04:44:12.964549 kernel: CPU topo: Max. threads per core: 2
Jun 21 04:44:12.964556 kernel: CPU topo: Num. cores per package: 1
Jun 21 04:44:12.964564 kernel: CPU topo: Num. threads per package: 2
Jun 21 04:44:12.964571 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 21 04:44:12.964578 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 21 04:44:12.964585 kernel: Booting paravirtualized kernel on Hyper-V
Jun 21 04:44:12.964592 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 21 04:44:12.964599 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 21 04:44:12.964606 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 21 04:44:12.964613 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 21 04:44:12.964619 kernel: pcpu-alloc: [0] 0 1
Jun 21 04:44:12.964628 kernel: Hyper-V: PV spinlocks enabled
Jun 21 04:44:12.964635 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 21 04:44:12.964643 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 04:44:12.964650 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 04:44:12.964657 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 21 04:44:12.964664 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 04:44:12.964671 kernel: Fallback order for Node 0: 0
Jun 21 04:44:12.964678 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877
Jun 21 04:44:12.964686 kernel: Policy zone: Normal
Jun 21 04:44:12.964693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 04:44:12.964700 kernel: software IO TLB: area num 2.
Jun 21 04:44:12.964707 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 21 04:44:12.964714 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 21 04:44:12.964721 kernel: ftrace: allocated 157 pages with 5 groups
Jun 21 04:44:12.964728 kernel: Dynamic Preempt: voluntary
Jun 21 04:44:12.964735 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 04:44:12.964742 kernel: rcu: RCU event tracing is enabled.
Jun 21 04:44:12.964751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 21 04:44:12.964763 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 04:44:12.964771 kernel: Rude variant of Tasks RCU enabled.
Jun 21 04:44:12.964779 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 04:44:12.964787 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 04:44:12.964794 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 21 04:44:12.964802 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 04:44:12.964810 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 04:44:12.964817 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 04:44:12.964825 kernel: Using NULL legacy PIC
Jun 21 04:44:12.964832 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 21 04:44:12.964841 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 04:44:12.964849 kernel: Console: colour dummy device 80x25
Jun 21 04:44:12.964856 kernel: printk: legacy console [tty1] enabled
Jun 21 04:44:12.964864 kernel: printk: legacy console [ttyS0] enabled
Jun 21 04:44:12.964871 kernel: printk: legacy bootconsole [earlyser0] disabled
Jun 21 04:44:12.964879 kernel: ACPI: Core revision 20240827
Jun 21 04:44:12.964888 kernel: Failed to register legacy timer interrupt
Jun 21 04:44:12.964895 kernel: APIC: Switch to symmetric I/O mode setup
Jun 21 04:44:12.964903 kernel: x2apic enabled
Jun 21 04:44:12.964911 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 21 04:44:12.964918 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0
Jun 21 04:44:12.964926 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 21 04:44:12.964933 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jun 21 04:44:12.964941 kernel: Hyper-V: Using IPI hypercalls
Jun 21 04:44:12.964949 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 21 04:44:12.964957 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 21 04:44:12.964965 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 21 04:44:12.964973 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 21 04:44:12.964980 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 21 04:44:12.964988 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 21 04:44:12.964995 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 21 04:44:12.965003 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jun 21 04:44:12.965011 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 21 04:44:12.965018 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 21 04:44:12.965027 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 21 04:44:12.965034 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 21 04:44:12.965042 kernel: Spectre V2 : Mitigation: Retpolines
Jun 21 04:44:12.965049 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 21 04:44:12.965057 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 21 04:44:12.965064 kernel: RETBleed: Vulnerable
Jun 21 04:44:12.965072 kernel: Speculative Store Bypass: Vulnerable
Jun 21 04:44:12.965079 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 21 04:44:12.965086 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 21 04:44:12.965094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 21 04:44:12.965101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 21 04:44:12.965110 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 21 04:44:12.965117 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 21 04:44:12.965125 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 21 04:44:12.965132 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jun 21 04:44:12.965140 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jun 21 04:44:12.965147 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jun 21 04:44:12.965163 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 21 04:44:12.965171 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 21 04:44:12.965179 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 21 04:44:12.965186 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 21 04:44:12.965195 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jun 21 04:44:12.965203 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jun 21 04:44:12.965210 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jun 21 04:44:12.965218 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jun 21 04:44:12.965225 kernel: Freeing SMP alternatives memory: 32K
Jun 21 04:44:12.965232 kernel: pid_max: default: 32768 minimum: 301
Jun 21 04:44:12.965240 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 04:44:12.965247 kernel: landlock: Up and running.
Jun 21 04:44:12.965254 kernel: SELinux: Initializing.
Jun 21 04:44:12.965262 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 21 04:44:12.965270 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 21 04:44:12.965277 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jun 21 04:44:12.965286 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jun 21 04:44:12.965294 kernel: signal: max sigframe size: 11952
Jun 21 04:44:12.965301 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 04:44:12.965309 kernel: rcu: Max phase no-delay instances is 400.
Jun 21 04:44:12.965317 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 04:44:12.965325 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 21 04:44:12.965332 kernel: smp: Bringing up secondary CPUs ...
Jun 21 04:44:12.965340 kernel: smpboot: x86: Booting SMP configuration:
Jun 21 04:44:12.965347 kernel: .... node #0, CPUs: #1
Jun 21 04:44:12.965356 kernel: smp: Brought up 1 node, 2 CPUs
Jun 21 04:44:12.965364 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jun 21 04:44:12.965372 kernel: Memory: 8082312K/8387508K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved)
Jun 21 04:44:12.965380 kernel: devtmpfs: initialized
Jun 21 04:44:12.965387 kernel: x86/mm: Memory block size: 128MB
Jun 21 04:44:12.965395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 21 04:44:12.965403 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 04:44:12.965411 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 21 04:44:12.965419 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 04:44:12.965428 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 04:44:12.965436 kernel: audit: initializing netlink subsys (disabled)
Jun 21 04:44:12.965444 kernel: audit: type=2000 audit(1750481050.029:1): state=initialized audit_enabled=0 res=1
Jun 21 04:44:12.965451 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 04:44:12.965459 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 21 04:44:12.965466 kernel: cpuidle: using governor menu
Jun 21 04:44:12.965474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 04:44:12.965482 kernel: dca service started, version 1.12.1
Jun 21 04:44:12.965489 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jun 21 04:44:12.965498 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
Jun 21 04:44:12.965506 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 21 04:44:12.965514 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 21 04:44:12.965521 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 21 04:44:12.965529 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 04:44:12.965536 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 04:44:12.965544 kernel: ACPI: Added _OSI(Module Device)
Jun 21 04:44:12.965551 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 04:44:12.965559 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 04:44:12.965568 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 04:44:12.965575 kernel: ACPI: Interpreter enabled
Jun 21 04:44:12.965583 kernel: ACPI: PM: (supports S0 S5)
Jun 21 04:44:12.965590 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 21 04:44:12.965598 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 21 04:44:12.965606 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 21 04:44:12.965614 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 21 04:44:12.965621 kernel: iommu: Default domain type: Translated
Jun 21 04:44:12.965629 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 21 04:44:12.965637 kernel: efivars: Registered efivars operations
Jun 21 04:44:12.965645 kernel: PCI: Using ACPI for IRQ routing
Jun 21 04:44:12.965652 kernel: PCI: System does not support PCI
Jun 21 04:44:12.965660 kernel: vgaarb: loaded
Jun 21 04:44:12.965667 kernel: clocksource: Switched to clocksource tsc-early
Jun 21 04:44:12.965675 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 04:44:12.965683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 04:44:12.965690 kernel: pnp: PnP ACPI init
Jun 21 04:44:12.965698 kernel: pnp: PnP ACPI: found 3 devices
Jun 21 04:44:12.965707 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 21 04:44:12.965715 kernel: NET: Registered PF_INET protocol family
Jun 21 04:44:12.965723 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 21 04:44:12.965730 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 21 04:44:12.965738 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 04:44:12.965746 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 04:44:12.965753 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 21 04:44:12.965761 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 21 04:44:12.965769 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 21 04:44:12.965778 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 21 04:44:12.965785 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 04:44:12.965793 kernel: NET: Registered PF_XDP protocol family
Jun 21 04:44:12.965800 kernel: PCI: CLS 0 bytes, default 64
Jun 21 04:44:12.965808 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 21 04:44:12.965815 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB)
Jun 21 04:44:12.965823 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jun 21 04:44:12.965831 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jun 21 04:44:12.965838 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 21 04:44:12.965847 kernel: clocksource: Switched to clocksource tsc
Jun 21 04:44:12.965855 kernel: Initialise system trusted keyrings
Jun 21 04:44:12.965862 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 21 04:44:12.965870 kernel: Key type asymmetric registered
Jun 21 04:44:12.965877 kernel: Asymmetric key parser 'x509' registered
Jun 21 04:44:12.965885 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 21 04:44:12.965893 kernel: io scheduler mq-deadline registered
Jun 21 04:44:12.965901 kernel: io scheduler kyber registered
Jun 21 04:44:12.965908 kernel: io scheduler bfq registered
Jun 21 04:44:12.965917 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 21 04:44:12.965925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 04:44:12.965933 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 21 04:44:12.965941 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 21 04:44:12.965949 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jun 21 04:44:12.965957 kernel: i8042: PNP: No PS/2 controller found.
Jun 21 04:44:12.966081 kernel: rtc_cmos 00:02: registered as rtc0
Jun 21 04:44:12.966149 kernel: rtc_cmos 00:02: setting system clock to 2025-06-21T04:44:12 UTC (1750481052)
Jun 21 04:44:12.966223 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 21 04:44:12.966232 kernel: intel_pstate: Intel P-state driver initializing
Jun 21 04:44:12.966240 kernel: efifb: probing for efifb
Jun 21 04:44:12.966248 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 21 04:44:12.966256 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 21 04:44:12.966264 kernel: efifb: scrolling: redraw
Jun 21 04:44:12.966272 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 21 04:44:12.966280 kernel: Console: switching to colour frame buffer device 128x48
Jun 21 04:44:12.966289 kernel: fb0: EFI VGA frame buffer device
Jun 21 04:44:12.966297 kernel: pstore: Using crash dump compression: deflate
Jun 21 04:44:12.966305 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 21 04:44:12.966312 kernel: NET: Registered PF_INET6 protocol family
Jun 21 04:44:12.966320 kernel: Segment Routing with IPv6
Jun 21 04:44:12.966328 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 04:44:12.966336 kernel: NET: Registered PF_PACKET protocol family
Jun 21 04:44:12.966344 kernel: Key type dns_resolver registered
Jun 21 04:44:12.966352 kernel: IPI shorthand broadcast: enabled
Jun 21 04:44:12.966361 kernel: sched_clock: Marking stable (2870004531, 89265314)->(3276010182, -316740337)
Jun 21 04:44:12.966369 kernel: registered taskstats version 1
Jun 21 04:44:12.966377 kernel: Loading compiled-in X.509 certificates
Jun 21 04:44:12.966385 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7'
Jun 21 04:44:12.966393 kernel: Demotion targets for Node 0: null
Jun 21 04:44:12.966400 kernel: Key type .fscrypt registered
Jun 21 04:44:12.966408 kernel: Key type fscrypt-provisioning registered
Jun 21 04:44:12.966416 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 04:44:12.966424 kernel: ima: Allocated hash algorithm: sha1
Jun 21 04:44:12.966433 kernel: ima: No architecture policies found
Jun 21 04:44:12.966441 kernel: clk: Disabling unused clocks
Jun 21 04:44:12.966449 kernel: Warning: unable to open an initial console.
Jun 21 04:44:12.966456 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 21 04:44:12.966464 kernel: Write protecting the kernel read-only data: 24576k
Jun 21 04:44:12.966472 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 21 04:44:12.966480 kernel: Run /init as init process
Jun 21 04:44:12.966488 kernel: with arguments:
Jun 21 04:44:12.966495 kernel: /init
Jun 21 04:44:12.966505 kernel: with environment:
Jun 21 04:44:12.966512 kernel: HOME=/
Jun 21 04:44:12.966520 kernel: TERM=linux
Jun 21 04:44:12.966528 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 04:44:12.966537 systemd[1]: Successfully made /usr/ read-only.
Jun 21 04:44:12.966548 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 04:44:12.966557 systemd[1]: Detected virtualization microsoft.
Jun 21 04:44:12.966565 systemd[1]: Detected architecture x86-64.
Jun 21 04:44:12.966575 systemd[1]: Running in initrd.
Jun 21 04:44:12.966583 systemd[1]: No hostname configured, using default hostname.
Jun 21 04:44:12.966591 systemd[1]: Hostname set to .
Jun 21 04:44:12.966600 systemd[1]: Initializing machine ID from random generator.
Jun 21 04:44:12.966608 systemd[1]: Queued start job for default target initrd.target.
Jun 21 04:44:12.966616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 04:44:12.966625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 04:44:12.966634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 04:44:12.966644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 04:44:12.966653 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 21 04:44:12.966661 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 21 04:44:12.966670 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 21 04:44:12.966677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 21 04:44:12.966686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 04:44:12.966695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 04:44:12.966704 systemd[1]: Reached target paths.target - Path Units.
Jun 21 04:44:12.966712 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 04:44:12.966720 systemd[1]: Reached target swap.target - Swaps.
Jun 21 04:44:12.966729 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 04:44:12.966738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 04:44:12.966746 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 04:44:12.966755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 21 04:44:12.966763 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 21 04:44:12.966773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 04:44:12.966782 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 04:44:12.966790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 04:44:12.966798 systemd[1]: Reached target sockets.target - Socket Units.
Jun 21 04:44:12.966807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 21 04:44:12.966815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 04:44:12.966824 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 21 04:44:12.966832 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 21 04:44:12.966842 systemd[1]: Starting systemd-fsck-usr.service...
Jun 21 04:44:12.966851 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 04:44:12.966859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 04:44:12.966868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 04:44:12.966897 systemd-journald[205]: Collecting audit messages is disabled.
Jun 21 04:44:12.966923 systemd-journald[205]: Journal started
Jun 21 04:44:12.966944 systemd-journald[205]: Runtime Journal (/run/log/journal/535e9380c56e4e1b851fc097f5d31c2d) is 8M, max 159M, 151M free.
Jun 21 04:44:12.969362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 21 04:44:12.971409 systemd-modules-load[207]: Inserted module 'overlay'
Jun 21 04:44:12.977502 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 04:44:12.979445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 04:44:12.985207 systemd[1]: Finished systemd-fsck-usr.service.
Jun 21 04:44:12.990263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 04:44:12.992977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 21 04:44:13.007176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 04:44:13.012982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 21 04:44:13.010104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 21 04:44:13.017976 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 21 04:44:13.023227 kernel: Bridge firewalling registered
Jun 21 04:44:13.023144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 04:44:13.024363 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jun 21 04:44:13.026991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 04:44:13.031373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 04:44:13.042867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 04:44:13.050032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 04:44:13.054486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 04:44:13.058213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 21 04:44:13.063249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 04:44:13.069356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 04:44:13.073921 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 21 04:44:13.094883 systemd-resolved[243]: Positive Trust Anchors:
Jun 21 04:44:13.094899 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 21 04:44:13.094930 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 21 04:44:13.112554 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 04:44:13.101254 systemd-resolved[243]: Defaulting to hostname 'linux'.
Jun 21 04:44:13.107490 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 21 04:44:13.115273 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 21 04:44:13.171181 kernel: SCSI subsystem initialized
Jun 21 04:44:13.178166 kernel: Loading iSCSI transport class v2.0-870.
Jun 21 04:44:13.186175 kernel: iscsi: registered transport (tcp)
Jun 21 04:44:13.203174 kernel: iscsi: registered transport (qla4xxx)
Jun 21 04:44:13.203209 kernel: QLogic iSCSI HBA Driver
Jun 21 04:44:13.215003 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 04:44:13.224916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:44:13.230630 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:44:13.257641 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 04:44:13.261083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 04:44:13.307171 kernel: raid6: avx512x4 gen() 44447 MB/s Jun 21 04:44:13.325167 kernel: raid6: avx512x2 gen() 44034 MB/s Jun 21 04:44:13.342163 kernel: raid6: avx512x1 gen() 26770 MB/s Jun 21 04:44:13.360163 kernel: raid6: avx2x4 gen() 38603 MB/s Jun 21 04:44:13.377165 kernel: raid6: avx2x2 gen() 39011 MB/s Jun 21 04:44:13.394768 kernel: raid6: avx2x1 gen() 31283 MB/s Jun 21 04:44:13.394786 kernel: raid6: using algorithm avx512x4 gen() 44447 MB/s Jun 21 04:44:13.412373 kernel: raid6: .... xor() 7493 MB/s, rmw enabled Jun 21 04:44:13.412462 kernel: raid6: using avx512x2 recovery algorithm Jun 21 04:44:13.430174 kernel: xor: automatically using best checksumming function avx Jun 21 04:44:13.541185 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 04:44:13.545641 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:44:13.548827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:44:13.574871 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jun 21 04:44:13.578821 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:13.586066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 04:44:13.602713 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jun 21 04:44:13.618928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:44:13.623985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jun 21 04:44:13.661348 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:44:13.665186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 04:44:13.719165 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 04:44:13.728063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:13.729921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:13.731824 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:13.738212 kernel: AES CTR mode by8 optimization enabled Jun 21 04:44:13.740311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:13.745583 kernel: hv_vmbus: Vmbus version:5.3 Jun 21 04:44:13.746263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:13.756481 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 21 04:44:13.756499 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 21 04:44:13.746337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:13.768735 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 21 04:44:13.768759 kernel: PTP clock support registered Jun 21 04:44:13.768659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:13.835784 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 21 04:44:13.835821 kernel: hv_utils: Registering HyperV Utility Driver Jun 21 04:44:13.835832 kernel: hv_vmbus: registering driver hv_utils Jun 21 04:44:13.835842 kernel: hv_utils: Shutdown IC version 3.2 Jun 21 04:44:13.838173 kernel: hv_utils: TimeSync IC version 4.0 Jun 21 04:44:14.167566 systemd-resolved[243]: Clock change detected. Flushing caches. 
Jun 21 04:44:14.169703 kernel: hv_utils: Heartbeat IC version 3.0 Jun 21 04:44:14.194298 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 21 04:44:14.196333 kernel: hv_vmbus: registering driver hv_storvsc Jun 21 04:44:14.196416 kernel: hv_vmbus: registering driver hid_hyperv Jun 21 04:44:14.198982 kernel: hv_vmbus: registering driver hv_pci Jun 21 04:44:14.199014 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 21 04:44:14.199026 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 21 04:44:14.204436 kernel: scsi host0: storvsc_host_t Jun 21 04:44:14.208525 kernel: hv_vmbus: registering driver hv_netvsc Jun 21 04:44:14.208561 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 21 04:44:14.209759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:14.217283 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:14.225275 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 21 04:44:14.225468 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa4fe (unnamed net_device) (uninitialized): VF slot 1 added Jun 21 04:44:14.225574 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 21 04:44:14.229064 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:14.244269 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 21 04:44:14.244316 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 21 04:44:14.244330 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 21 04:44:14.244453 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 21 04:44:14.248844 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 21 04:44:14.256405 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 
link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 21 04:44:14.265272 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:14.265384 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 21 04:44:14.276269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:14.286045 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 21 04:44:14.289123 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:14.299410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#0 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:14.552301 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 21 04:44:14.558278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:14.926310 kernel: nvme nvme0: using unchecked data buffer Jun 21 04:44:15.123164 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 21 04:44:15.134981 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 21 04:44:15.169015 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 21 04:44:15.173072 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 21 04:44:15.183369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:44:15.184007 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 04:44:15.184846 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:44:15.192650 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:44:15.195773 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jun 21 04:44:15.203803 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 04:44:15.207106 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 04:44:15.227386 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:44:15.233274 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:15.269273 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:15.275296 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 21 04:44:15.280232 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 21 04:44:15.280375 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:15.301344 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 21 04:44:15.301390 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 21 04:44:15.301407 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 21 04:44:15.301425 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 21 04:44:15.323179 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:15.323345 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 21 04:44:15.323480 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 21 04:44:15.331185 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:15.342269 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 21 04:44:15.346291 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa4fe eth0: VF registering: eth1 Jun 21 04:44:15.350275 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 21 04:44:15.363275 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 21 04:44:16.244541 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:16.244604 
disk-uuid[678]: The operation has completed successfully. Jun 21 04:44:16.290405 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 04:44:16.290506 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 04:44:16.328173 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 04:44:16.343330 sh[717]: Success Jun 21 04:44:16.370331 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 04:44:16.370372 kernel: device-mapper: uevent: version 1.0.3 Jun 21 04:44:16.371322 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 04:44:16.379315 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 04:44:16.585955 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 04:44:16.588133 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 04:44:16.601183 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 04:44:16.616267 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 04:44:16.619268 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (730) Jun 21 04:44:16.619296 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 04:44:16.621332 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:16.622482 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 04:44:16.953771 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 04:44:16.958685 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jun 21 04:44:16.961359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 04:44:16.964296 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 04:44:16.974880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 04:44:16.995273 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (753) Jun 21 04:44:17.002308 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:17.002349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:17.002363 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:17.046945 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:44:17.052349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:44:17.066272 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:17.068000 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 04:44:17.074363 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 04:44:17.088209 systemd-networkd[893]: lo: Link UP Jun 21 04:44:17.088215 systemd-networkd[893]: lo: Gained carrier Jun 21 04:44:17.090847 systemd-networkd[893]: Enumeration completed Jun 21 04:44:17.095393 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 21 04:44:17.095569 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 21 04:44:17.091136 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 21 04:44:17.100941 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa4fe eth0: Data path switched to VF: enP30832s1 Jun 21 04:44:17.091174 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:17.091177 systemd-networkd[893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:44:17.096793 systemd[1]: Reached target network.target - Network. Jun 21 04:44:17.101578 systemd-networkd[893]: enP30832s1: Link UP Jun 21 04:44:17.101634 systemd-networkd[893]: eth0: Link UP Jun 21 04:44:17.101727 systemd-networkd[893]: eth0: Gained carrier Jun 21 04:44:17.101736 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:17.104383 systemd-networkd[893]: enP30832s1: Gained carrier Jun 21 04:44:17.119280 systemd-networkd[893]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:44:18.145227 ignition[900]: Ignition 2.21.0 Jun 21 04:44:18.145240 ignition[900]: Stage: fetch-offline Jun 21 04:44:18.147021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:44:18.145359 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:18.153368 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 21 04:44:18.145366 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:18.145461 ignition[900]: parsed url from cmdline: "" Jun 21 04:44:18.145463 ignition[900]: no config URL provided Jun 21 04:44:18.145468 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:44:18.145475 ignition[900]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:44:18.145480 ignition[900]: failed to fetch config: resource requires networking Jun 21 04:44:18.145770 ignition[900]: Ignition finished successfully Jun 21 04:44:18.180691 ignition[909]: Ignition 2.21.0 Jun 21 04:44:18.180701 ignition[909]: Stage: fetch Jun 21 04:44:18.180873 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:18.180880 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:18.180944 ignition[909]: parsed url from cmdline: "" Jun 21 04:44:18.180947 ignition[909]: no config URL provided Jun 21 04:44:18.180951 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:44:18.180957 ignition[909]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:44:18.180997 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 21 04:44:18.245711 ignition[909]: GET result: OK Jun 21 04:44:18.245769 ignition[909]: config has been read from IMDS userdata Jun 21 04:44:18.245792 ignition[909]: parsing config with SHA512: 30a7aca5e7a011d35e5905dac00ac1506cc17b8357b43fd78fc1515f53020791ee6db5a7845446f9e2aaa89b7d4446e386d110747be5e2627b2bb06ac7ef3d60 Jun 21 04:44:18.248763 unknown[909]: fetched base config from "system" Jun 21 04:44:18.249102 ignition[909]: fetch: fetch complete Jun 21 04:44:18.248769 unknown[909]: fetched base config from "system" Jun 21 04:44:18.249106 ignition[909]: fetch: fetch passed Jun 21 04:44:18.248773 unknown[909]: fetched user config from "azure" Jun 21 04:44:18.249139 ignition[909]: Ignition finished 
successfully Jun 21 04:44:18.251109 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 04:44:18.254148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 04:44:18.278118 ignition[916]: Ignition 2.21.0 Jun 21 04:44:18.278340 ignition[916]: Stage: kargs Jun 21 04:44:18.278586 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:18.278594 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:18.281844 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 04:44:18.279822 ignition[916]: kargs: kargs passed Jun 21 04:44:18.286194 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 04:44:18.280077 ignition[916]: Ignition finished successfully Jun 21 04:44:18.304785 ignition[922]: Ignition 2.21.0 Jun 21 04:44:18.304794 ignition[922]: Stage: disks Jun 21 04:44:18.304982 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:18.308100 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 04:44:18.304990 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:18.312388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 04:44:18.306111 ignition[922]: disks: disks passed Jun 21 04:44:18.316293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 04:44:18.306148 ignition[922]: Ignition finished successfully Jun 21 04:44:18.320296 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:44:18.324286 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:44:18.327289 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:44:18.331966 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 21 04:44:18.392409 systemd-fsck[931]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 21 04:44:18.395560 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 04:44:18.399978 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 04:44:18.409435 systemd-networkd[893]: eth0: Gained IPv6LL Jun 21 04:44:18.645267 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 04:44:18.646142 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 04:44:18.648285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 04:44:18.666866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:44:18.683332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 04:44:18.686958 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 04:44:18.689585 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 04:44:18.689614 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:44:18.703150 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (940) Jun 21 04:44:18.698772 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 04:44:18.711071 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:18.711107 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:18.711119 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:18.712065 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 04:44:18.719900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 04:44:18.729388 systemd-networkd[893]: enP30832s1: Gained IPv6LL Jun 21 04:44:19.296383 coreos-metadata[942]: Jun 21 04:44:19.296 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:44:19.311941 coreos-metadata[942]: Jun 21 04:44:19.311 INFO Fetch successful Jun 21 04:44:19.313191 coreos-metadata[942]: Jun 21 04:44:19.311 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:44:19.328157 coreos-metadata[942]: Jun 21 04:44:19.328 INFO Fetch successful Jun 21 04:44:19.342533 coreos-metadata[942]: Jun 21 04:44:19.342 INFO wrote hostname ci-4372.0.0-a-1fcff97c08 to /sysroot/etc/hostname Jun 21 04:44:19.345028 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 04:44:19.378327 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 04:44:19.396467 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Jun 21 04:44:19.400753 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 04:44:19.404374 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 04:44:20.197258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 04:44:20.201108 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 04:44:20.203223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 04:44:20.223608 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 04:44:20.225865 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:20.242815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 21 04:44:20.251104 ignition[1059]: INFO : Ignition 2.21.0 Jun 21 04:44:20.251104 ignition[1059]: INFO : Stage: mount Jun 21 04:44:20.256783 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:20.256783 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:20.256783 ignition[1059]: INFO : mount: mount passed Jun 21 04:44:20.256783 ignition[1059]: INFO : Ignition finished successfully Jun 21 04:44:20.253292 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 04:44:20.254529 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 04:44:20.269752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:44:20.287265 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1071) Jun 21 04:44:20.289385 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:44:20.289474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:44:20.290504 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:44:20.295000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 04:44:20.317619 ignition[1088]: INFO : Ignition 2.21.0 Jun 21 04:44:20.317619 ignition[1088]: INFO : Stage: files Jun 21 04:44:20.317619 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:44:20.317619 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:44:20.317619 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Jun 21 04:44:20.332696 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 04:44:20.332696 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 04:44:20.361141 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 04:44:20.364317 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 04:44:20.364317 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 04:44:20.361489 unknown[1088]: wrote ssh authorized keys file for user: core Jun 21 04:44:20.375911 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 04:44:20.381330 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 21 04:44:20.422616 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 04:44:20.567832 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 04:44:20.567832 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 04:44:20.572399 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 21 04:44:21.167752 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 21 04:44:21.436856 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 21 04:44:21.436856 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 04:44:21.443349 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 04:44:21.471934 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 04:44:21.471934 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 04:44:21.471934 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 21 04:44:22.203972 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 21 04:44:22.782414 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 04:44:22.782414 ignition[1088]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 21 04:44:22.812391 ignition[1088]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 04:44:22.829624 ignition[1088]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 04:44:22.829624 ignition[1088]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 21 04:44:22.833012 ignition[1088]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 21 04:44:22.833012 ignition[1088]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 21 04:44:22.833012 ignition[1088]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 04:44:22.833012 ignition[1088]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 04:44:22.833012 ignition[1088]: INFO : files: files passed
Jun 21 04:44:22.833012 ignition[1088]: INFO : Ignition finished successfully
Jun 21 04:44:22.831894 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 21 04:44:22.849138 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 21 04:44:22.851496 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 21 04:44:22.872048 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 21 04:44:22.872192 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 21 04:44:22.889110 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 04:44:22.889110 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 04:44:22.897444 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 04:44:22.891804 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 04:44:22.894006 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 21 04:44:22.898464 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 21 04:44:22.927605 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 21 04:44:22.927693 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 21 04:44:22.932461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 21 04:44:22.934824 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 21 04:44:22.937322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 21 04:44:22.939383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 21 04:44:22.968201 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 04:44:22.969692 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 21 04:44:22.984219 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 21 04:44:22.984597 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 04:44:22.984885 systemd[1]: Stopped target timers.target - Timer Units.
Jun 21 04:44:22.993448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 21 04:44:22.993572 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 04:44:23.000684 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 21 04:44:23.002321 systemd[1]: Stopped target basic.target - Basic System.
Jun 21 04:44:23.005220 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 21 04:44:23.009551 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 21 04:44:23.014395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 21 04:44:23.015935 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 21 04:44:23.016465 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 21 04:44:23.017051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 21 04:44:23.017645 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 21 04:44:23.018215 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 21 04:44:23.018573 systemd[1]: Stopped target swap.target - Swaps.
Jun 21 04:44:23.018902 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 21 04:44:23.019023 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 21 04:44:23.019529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 21 04:44:23.020073 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 04:44:23.020559 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 21 04:44:23.023171 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 04:44:23.051367 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 21 04:44:23.051506 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 21 04:44:23.054245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 21 04:44:23.054378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 04:44:23.059420 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 21 04:44:23.059538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 21 04:44:23.063407 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 21 04:44:23.063519 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 21 04:44:23.069865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 21 04:44:23.080443 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 21 04:44:23.084837 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 21 04:44:23.085012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 04:44:23.089094 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 21 04:44:23.096770 ignition[1142]: INFO : Ignition 2.21.0
Jun 21 04:44:23.096770 ignition[1142]: INFO : Stage: umount
Jun 21 04:44:23.096770 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 04:44:23.096770 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 21 04:44:23.096770 ignition[1142]: INFO : umount: umount passed
Jun 21 04:44:23.096770 ignition[1142]: INFO : Ignition finished successfully
Jun 21 04:44:23.089215 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 21 04:44:23.102859 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 21 04:44:23.102941 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 21 04:44:23.109411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 21 04:44:23.109471 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 21 04:44:23.111197 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 21 04:44:23.111278 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 21 04:44:23.118604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 21 04:44:23.118643 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 21 04:44:23.121669 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 21 04:44:23.121697 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 21 04:44:23.125618 systemd[1]: Stopped target network.target - Network.
Jun 21 04:44:23.129896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 21 04:44:23.129973 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 21 04:44:23.132492 systemd[1]: Stopped target paths.target - Path Units.
Jun 21 04:44:23.139290 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 21 04:44:23.145533 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 04:44:23.150313 systemd[1]: Stopped target slices.target - Slice Units.
Jun 21 04:44:23.152961 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 21 04:44:23.158680 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 21 04:44:23.158715 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 04:44:23.166332 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 21 04:44:23.166378 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 04:44:23.169755 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 21 04:44:23.169810 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 21 04:44:23.174490 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 21 04:44:23.174536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 21 04:44:23.176653 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 21 04:44:23.181098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 21 04:44:23.184995 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 21 04:44:23.191018 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 21 04:44:23.191673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 21 04:44:23.196972 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 21 04:44:23.197130 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 21 04:44:23.197207 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 21 04:44:23.201049 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 21 04:44:23.201444 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 21 04:44:23.207510 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 21 04:44:23.207548 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 04:44:23.219837 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 21 04:44:23.223454 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 21 04:44:23.223516 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 21 04:44:23.227636 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 04:44:23.227671 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 04:44:23.233860 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 21 04:44:23.233899 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 21 04:44:23.237918 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 21 04:44:23.237958 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 04:44:23.246194 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 04:44:23.255678 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 21 04:44:23.255742 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 21 04:44:23.260471 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 21 04:44:23.260586 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 21 04:44:23.267406 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 21 04:44:23.267454 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 21 04:44:23.275849 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 21 04:44:23.275880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 04:44:23.290344 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa4fe eth0: Data path switched from VF: enP30832s1
Jun 21 04:44:23.290512 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 21 04:44:23.277467 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 21 04:44:23.277503 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 21 04:44:23.277746 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 21 04:44:23.277776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 21 04:44:23.278030 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 21 04:44:23.278059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 04:44:23.287960 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 21 04:44:23.296070 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 21 04:44:23.296120 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 04:44:23.300430 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 21 04:44:23.300476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 04:44:23.316405 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 21 04:44:23.316452 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 04:44:23.321841 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 21 04:44:23.321885 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 04:44:23.328501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 21 04:44:23.328727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 04:44:23.333077 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 21 04:44:23.333134 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 21 04:44:23.333167 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 21 04:44:23.333200 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 21 04:44:23.333522 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 21 04:44:23.333599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 21 04:44:23.338484 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 21 04:44:23.338551 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 21 04:44:23.433145 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 21 04:44:23.433231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 21 04:44:23.436021 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 21 04:44:23.441322 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 21 04:44:23.441377 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 21 04:44:23.448182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 21 04:44:23.459947 systemd[1]: Switching root.
Jun 21 04:44:23.502704 systemd-journald[205]: Journal stopped
Jun 21 04:44:27.099575 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jun 21 04:44:27.099604 kernel: SELinux: policy capability network_peer_controls=1
Jun 21 04:44:27.099616 kernel: SELinux: policy capability open_perms=1
Jun 21 04:44:27.099624 kernel: SELinux: policy capability extended_socket_class=1
Jun 21 04:44:27.099631 kernel: SELinux: policy capability always_check_network=0
Jun 21 04:44:27.099638 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 21 04:44:27.099649 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 21 04:44:27.099657 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 21 04:44:27.099665 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 21 04:44:27.099672 kernel: SELinux: policy capability userspace_initial_context=0
Jun 21 04:44:27.099680 kernel: audit: type=1403 audit(1750481064.794:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 21 04:44:27.099690 systemd[1]: Successfully loaded SELinux policy in 118.923ms.
Jun 21 04:44:27.099699 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.372ms.
Jun 21 04:44:27.099710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 04:44:27.099719 systemd[1]: Detected virtualization microsoft.
Jun 21 04:44:27.099728 systemd[1]: Detected architecture x86-64.
Jun 21 04:44:27.099735 systemd[1]: Detected first boot.
Jun 21 04:44:27.099744 systemd[1]: Hostname set to .
Jun 21 04:44:27.099752 systemd[1]: Initializing machine ID from random generator.
Jun 21 04:44:27.099761 zram_generator::config[1186]: No configuration found.
Jun 21 04:44:27.099771 kernel: Guest personality initialized and is inactive
Jun 21 04:44:27.099779 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jun 21 04:44:27.099787 kernel: Initialized host personality
Jun 21 04:44:27.099795 kernel: NET: Registered PF_VSOCK protocol family
Jun 21 04:44:27.099803 systemd[1]: Populated /etc with preset unit settings.
Jun 21 04:44:27.099814 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 21 04:44:27.099822 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 21 04:44:27.099831 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 21 04:44:27.099840 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 21 04:44:27.099849 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 21 04:44:27.099859 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 21 04:44:27.099868 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 21 04:44:27.099878 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 21 04:44:27.099887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 21 04:44:27.099895 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 21 04:44:27.099904 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 21 04:44:27.099913 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 21 04:44:27.099921 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 04:44:27.099930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 04:44:27.099939 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 21 04:44:27.099950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 21 04:44:27.099961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 21 04:44:27.099970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 04:44:27.099980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 21 04:44:27.099988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 04:44:27.099997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 04:44:27.100006 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 21 04:44:27.100015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 21 04:44:27.100026 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 21 04:44:27.100035 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 21 04:44:27.100044 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 04:44:27.100053 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 21 04:44:27.100061 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 04:44:27.100070 systemd[1]: Reached target swap.target - Swaps.
Jun 21 04:44:27.100079 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 21 04:44:27.100088 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 21 04:44:27.100101 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 21 04:44:27.100110 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 04:44:27.100119 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 04:44:27.100128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 04:44:27.100137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 21 04:44:27.100147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 21 04:44:27.100156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 21 04:44:27.100165 systemd[1]: Mounting media.mount - External Media Directory...
Jun 21 04:44:27.100173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 04:44:27.100183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 21 04:44:27.100192 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 21 04:44:27.100201 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 21 04:44:27.100210 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 21 04:44:27.100220 systemd[1]: Reached target machines.target - Containers.
Jun 21 04:44:27.100229 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 21 04:44:27.100238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 21 04:44:27.100247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 04:44:27.100287 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 21 04:44:27.100296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 21 04:44:27.100304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 21 04:44:27.100313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 21 04:44:27.100324 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 21 04:44:27.100333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 21 04:44:27.100342 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 21 04:44:27.100351 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 21 04:44:27.100360 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 21 04:44:27.100369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 21 04:44:27.100378 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 21 04:44:27.100387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 21 04:44:27.100398 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 04:44:27.100407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 04:44:27.100417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 04:44:27.100426 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 21 04:44:27.100435 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 21 04:44:27.100444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 21 04:44:27.100453 kernel: loop: module loaded
Jun 21 04:44:27.100461 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 21 04:44:27.100470 systemd[1]: Stopped verity-setup.service.
Jun 21 04:44:27.100481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 04:44:27.100490 kernel: fuse: init (API version 7.41)
Jun 21 04:44:27.100499 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 21 04:44:27.100508 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 21 04:44:27.100517 systemd[1]: Mounted media.mount - External Media Directory.
Jun 21 04:44:27.100525 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 21 04:44:27.100552 systemd-journald[1283]: Collecting audit messages is disabled.
Jun 21 04:44:27.100575 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 21 04:44:27.100585 systemd-journald[1283]: Journal started
Jun 21 04:44:27.100606 systemd-journald[1283]: Runtime Journal (/run/log/journal/110759ccb1304565a6dfbd128c4eb50a) is 8M, max 159M, 151M free.
Jun 21 04:44:26.678306 systemd[1]: Queued start job for default target multi-user.target.
Jun 21 04:44:26.686712 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 21 04:44:26.687028 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 21 04:44:27.103467 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 04:44:27.104941 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 21 04:44:27.106197 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 21 04:44:27.107762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 04:44:27.110452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 21 04:44:27.110589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 21 04:44:27.112992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 21 04:44:27.113228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 21 04:44:27.116222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 21 04:44:27.116477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 21 04:44:27.118714 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 21 04:44:27.118851 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 21 04:44:27.121043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 21 04:44:27.121365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 21 04:44:27.123642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 04:44:27.126112 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 04:44:27.128735 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 21 04:44:27.131646 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 21 04:44:27.142232 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 04:44:27.148328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 21 04:44:27.151803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 21 04:44:27.154103 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 21 04:44:27.154133 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 21 04:44:27.157373 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 21 04:44:27.164364 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 21 04:44:27.166883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 21 04:44:27.169996 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 21 04:44:27.177396 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 21 04:44:27.179789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 21 04:44:27.194420 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 21 04:44:27.195274 kernel: ACPI: bus type drm_connector registered
Jun 21 04:44:27.196063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 21 04:44:27.198371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 04:44:27.201427 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 21 04:44:27.209558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 04:44:27.212954 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 21 04:44:27.214330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 21 04:44:27.222660 systemd-journald[1283]: Time spent on flushing to /var/log/journal/110759ccb1304565a6dfbd128c4eb50a is 14.327ms for 988 entries.
Jun 21 04:44:27.222660 systemd-journald[1283]: System Journal (/var/log/journal/110759ccb1304565a6dfbd128c4eb50a) is 8M, max 2.6G, 2.6G free.
Jun 21 04:44:27.266514 systemd-journald[1283]: Received client request to flush runtime journal.
Jun 21 04:44:27.266546 kernel: loop0: detected capacity change from 0 to 146240
Jun 21 04:44:27.216798 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 21 04:44:27.219566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 04:44:27.224928 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 21 04:44:27.236303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 21 04:44:27.238768 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 21 04:44:27.243202 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 21 04:44:27.267457 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 21 04:44:27.277779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 04:44:27.298374 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 21 04:44:27.347635 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Jun 21 04:44:27.347653 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Jun 21 04:44:27.363073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 04:44:27.365242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 21 04:44:27.554553 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 21 04:44:27.558423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 04:44:27.576139 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jun 21 04:44:27.576156 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Jun 21 04:44:27.578789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 04:44:27.611272 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 21 04:44:27.645270 kernel: loop1: detected capacity change from 0 to 113872
Jun 21 04:44:27.688309 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 21 04:44:27.960843 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 21 04:44:27.964293 kernel: loop2: detected capacity change from 0 to 224512 Jun 21 04:44:27.964842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:44:27.989615 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Jun 21 04:44:28.026265 kernel: loop3: detected capacity change from 0 to 28496 Jun 21 04:44:28.230192 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:28.236670 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:44:28.293382 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 04:44:28.346687 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 04:44:28.362710 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 04:44:28.375269 kernel: loop4: detected capacity change from 0 to 146240 Jun 21 04:44:28.395268 kernel: loop5: detected capacity change from 0 to 113872 Jun 21 04:44:28.409264 kernel: loop6: detected capacity change from 0 to 224512 Jun 21 04:44:28.417282 kernel: hv_vmbus: registering driver hyperv_fb Jun 21 04:44:28.421483 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 21 04:44:28.421532 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 21 04:44:28.422265 kernel: Console: switching to colour dummy device 80x25 Jun 21 04:44:28.426417 kernel: Console: switching to colour frame buffer device 128x48 Jun 21 04:44:28.431272 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 04:44:28.434274 kernel: loop7: detected capacity change from 0 to 28496 Jun 21 04:44:28.452083 (sd-merge)[1392]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 21 04:44:28.454345 kernel: hv_vmbus: registering driver hv_balloon Jun 21 04:44:28.454203 (sd-merge)[1392]: Merged extensions into '/usr'. 
Jun 21 04:44:28.460294 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 21 04:44:28.461201 systemd[1]: Reload requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 04:44:28.461277 systemd[1]: Reloading... Jun 21 04:44:28.500655 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:28.512288 systemd-networkd[1365]: lo: Link UP Jun 21 04:44:28.512296 systemd-networkd[1365]: lo: Gained carrier Jun 21 04:44:28.516438 systemd-networkd[1365]: Enumeration completed Jun 21 04:44:28.518454 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:28.518530 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:44:28.522293 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 21 04:44:28.527270 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 21 04:44:28.531284 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d4aa4fe eth0: Data path switched to VF: enP30832s1 Jun 21 04:44:28.533403 systemd-networkd[1365]: enP30832s1: Link UP Jun 21 04:44:28.534457 systemd-networkd[1365]: eth0: Link UP Jun 21 04:44:28.536285 systemd-networkd[1365]: eth0: Gained carrier Jun 21 04:44:28.536308 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:28.542229 systemd-networkd[1365]: enP30832s1: Gained carrier Jun 21 04:44:28.549380 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:44:28.577267 zram_generator::config[1446]: No configuration found. 
Jun 21 04:44:28.815267 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 21 04:44:28.814115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:44:28.910628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:44:28.913842 systemd[1]: Reloading finished in 452 ms. Jun 21 04:44:28.935880 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:44:28.938551 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 04:44:28.970025 systemd[1]: Starting ensure-sysext.service... Jun 21 04:44:28.973570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 04:44:28.978425 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 04:44:28.984469 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 04:44:28.988143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:44:28.994045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:29.011960 systemd[1]: Reload requested from client PID 1525 ('systemctl') (unit ensure-sysext.service)... Jun 21 04:44:29.011980 systemd[1]: Reloading... Jun 21 04:44:29.017803 systemd-tmpfiles[1530]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 04:44:29.017843 systemd-tmpfiles[1530]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 04:44:29.018073 systemd-tmpfiles[1530]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jun 21 04:44:29.019195 systemd-tmpfiles[1530]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 04:44:29.019914 systemd-tmpfiles[1530]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 04:44:29.020124 systemd-tmpfiles[1530]: ACLs are not supported, ignoring. Jun 21 04:44:29.020164 systemd-tmpfiles[1530]: ACLs are not supported, ignoring. Jun 21 04:44:29.024157 systemd-tmpfiles[1530]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:44:29.024166 systemd-tmpfiles[1530]: Skipping /boot Jun 21 04:44:29.032349 systemd-tmpfiles[1530]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:44:29.032359 systemd-tmpfiles[1530]: Skipping /boot Jun 21 04:44:29.085286 zram_generator::config[1566]: No configuration found. Jun 21 04:44:29.159591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:44:29.245347 systemd[1]: Reloading finished in 233 ms. Jun 21 04:44:29.262943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 04:44:29.263316 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 04:44:29.263579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:44:29.269739 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:44:29.272406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 04:44:29.274434 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 04:44:29.279418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 21 04:44:29.281436 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 04:44:29.290550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.290688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:29.292399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:44:29.296346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:44:29.297220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:44:29.297539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:29.297623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:44:29.297701 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.302753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.302912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:29.303061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:29.303145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jun 21 04:44:29.303233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.311514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:44:29.311649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:44:29.312632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.312961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:44:29.317394 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:44:29.317572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:44:29.317679 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:44:29.317834 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 04:44:29.318088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:44:29.318777 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:44:29.319287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:44:29.329810 systemd[1]: Finished ensure-sysext.service. Jun 21 04:44:29.335220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:44:29.335396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:44:29.338713 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 21 04:44:29.339296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:44:29.344686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:44:29.344893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:44:29.347443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 04:44:29.351623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:29.362362 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 04:44:29.411586 systemd-resolved[1632]: Positive Trust Anchors: Jun 21 04:44:29.411600 systemd-resolved[1632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:44:29.411632 systemd-resolved[1632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:44:29.413321 augenrules[1667]: No rules Jun 21 04:44:29.414124 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:44:29.414587 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:44:29.417029 systemd-resolved[1632]: Using system hostname 'ci-4372.0.0-a-1fcff97c08'. Jun 21 04:44:29.418438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jun 21 04:44:29.421399 systemd[1]: Reached target network.target - Network. Jun 21 04:44:29.422507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:44:29.705395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 04:44:29.708483 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 04:44:29.929402 systemd-networkd[1365]: enP30832s1: Gained IPv6LL Jun 21 04:44:30.377386 systemd-networkd[1365]: eth0: Gained IPv6LL Jun 21 04:44:30.379379 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 04:44:30.381151 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 04:44:31.305882 ldconfig[1321]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 04:44:31.314955 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 04:44:31.317653 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 04:44:31.338443 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 04:44:31.341450 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:44:31.343022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 04:44:31.344510 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 04:44:31.347312 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 04:44:31.348999 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 04:44:31.352348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jun 21 04:44:31.355300 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 04:44:31.359302 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 04:44:31.359332 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:44:31.361309 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:44:31.363836 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 04:44:31.368218 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 04:44:31.371242 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 04:44:31.374439 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 04:44:31.377315 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 04:44:31.398673 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 04:44:31.400131 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 04:44:31.404784 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 04:44:31.408850 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:44:31.410068 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:44:31.411168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:44:31.411188 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:44:31.412947 systemd[1]: Starting chronyd.service - NTP client/server... Jun 21 04:44:31.415016 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 04:44:31.419424 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jun 21 04:44:31.433431 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 04:44:31.439495 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 04:44:31.442322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 04:44:31.449621 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 04:44:31.451671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 04:44:31.452628 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 04:44:31.456444 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 21 04:44:31.457548 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 21 04:44:31.459194 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 21 04:44:31.464764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:44:31.467948 jq[1688]: false Jun 21 04:44:31.470790 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 04:44:31.474043 KVP[1691]: KVP starting; pid is:1691 Jun 21 04:44:31.474403 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 04:44:31.481276 kernel: hv_utils: KVP IC version 4.0 Jun 21 04:44:31.481315 KVP[1691]: KVP LIC Version: 3.1 Jun 21 04:44:31.482872 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 04:44:31.486598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jun 21 04:44:31.490116 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing passwd entry cache Jun 21 04:44:31.491519 oslogin_cache_refresh[1690]: Refreshing passwd entry cache Jun 21 04:44:31.492690 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 04:44:31.501754 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 04:44:31.504325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 04:44:31.504716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 04:44:31.507146 extend-filesystems[1689]: Found /dev/nvme0n1p6 Jun 21 04:44:31.508792 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 04:44:31.513499 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 04:44:31.519485 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting users, quitting Jun 21 04:44:31.519581 oslogin_cache_refresh[1690]: Failure getting users, quitting Jun 21 04:44:31.520036 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:44:31.520036 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing group entry cache Jun 21 04:44:31.519600 oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:44:31.519634 oslogin_cache_refresh[1690]: Refreshing group entry cache Jun 21 04:44:31.524294 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 04:44:31.526544 extend-filesystems[1689]: Found /dev/nvme0n1p9 Jun 21 04:44:31.528690 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jun 21 04:44:31.528862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 04:44:31.537285 extend-filesystems[1689]: Checking size of /dev/nvme0n1p9 Jun 21 04:44:31.545347 oslogin_cache_refresh[1690]: Failure getting groups, quitting Jun 21 04:44:31.543665 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 04:44:31.546396 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting groups, quitting Jun 21 04:44:31.546396 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:44:31.545356 oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:44:31.543819 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 04:44:31.547027 (chronyd)[1680]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 21 04:44:31.557150 update_engine[1704]: I20250621 04:44:31.554935 1704 main.cc:92] Flatcar Update Engine starting Jun 21 04:44:31.547591 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 04:44:31.558327 jq[1707]: true Jun 21 04:44:31.547758 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 04:44:31.550065 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 04:44:31.550214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 04:44:31.572795 chronyd[1732]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 21 04:44:31.583668 chronyd[1732]: Timezone right/UTC failed leap second check, ignoring Jun 21 04:44:31.583827 chronyd[1732]: Loaded seccomp filter (level 2) Jun 21 04:44:31.587560 systemd[1]: Started chronyd.service - NTP client/server. 
Jun 21 04:44:31.589542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 04:44:31.595689 jq[1719]: true Jun 21 04:44:31.596676 extend-filesystems[1689]: Old size kept for /dev/nvme0n1p9 Jun 21 04:44:31.600382 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 04:44:31.600625 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 04:44:31.615575 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 04:44:31.625602 tar[1715]: linux-amd64/LICENSE Jun 21 04:44:31.626892 tar[1715]: linux-amd64/helm Jun 21 04:44:31.663986 systemd-logind[1702]: New seat seat0. Jun 21 04:44:31.666625 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 04:44:31.666765 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 04:44:31.670288 dbus-daemon[1683]: [system] SELinux support is enabled Jun 21 04:44:31.670393 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 04:44:31.675411 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 04:44:31.677732 dbus-daemon[1683]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 21 04:44:31.675440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 04:44:31.679368 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 04:44:31.679387 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 04:44:31.685639 systemd[1]: Started update-engine.service - Update Engine. 
Jun 21 04:44:31.688537 update_engine[1704]: I20250621 04:44:31.685837 1704 update_check_scheduler.cc:74] Next update check in 4m42s Jun 21 04:44:31.690971 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 04:44:31.703189 bash[1764]: Updated "/home/core/.ssh/authorized_keys" Jun 21 04:44:31.704035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 04:44:31.707232 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 04:44:31.792274 coreos-metadata[1682]: Jun 21 04:44:31.790 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:44:31.795229 coreos-metadata[1682]: Jun 21 04:44:31.795 INFO Fetch successful Jun 21 04:44:31.795858 coreos-metadata[1682]: Jun 21 04:44:31.795 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 21 04:44:31.802377 coreos-metadata[1682]: Jun 21 04:44:31.802 INFO Fetch successful Jun 21 04:44:31.803740 coreos-metadata[1682]: Jun 21 04:44:31.803 INFO Fetching http://168.63.129.16/machine/bebff91b-8e12-433a-b933-d4710ef0e974/691d07f5%2D37f2%2D4cf2%2Dba17%2D208255ace56a.%5Fci%2D4372.0.0%2Da%2D1fcff97c08?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 21 04:44:31.806359 coreos-metadata[1682]: Jun 21 04:44:31.806 INFO Fetch successful Jun 21 04:44:31.806575 coreos-metadata[1682]: Jun 21 04:44:31.806 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:44:31.815654 coreos-metadata[1682]: Jun 21 04:44:31.815 INFO Fetch successful Jun 21 04:44:31.880062 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 04:44:31.892482 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 21 04:44:31.960362 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 04:44:32.333491 sshd_keygen[1742]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 04:44:32.361172 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 04:44:32.367500 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 04:44:32.372937 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 21 04:44:32.395888 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 04:44:32.396096 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 04:44:32.404554 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 04:44:32.421324 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 21 04:44:32.424940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 04:44:32.430544 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 04:44:32.436345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 04:44:32.437962 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 04:44:32.502834 tar[1715]: linux-amd64/README.md Jun 21 04:44:32.517208 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jun 21 04:44:32.646811 containerd[1723]: time="2025-06-21T04:44:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 04:44:32.646811 containerd[1723]: time="2025-06-21T04:44:32.646751479Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655422155Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.746µs" Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655454204Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655473936Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655593768Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655605324Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655626671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655673762Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:44:32.655888 containerd[1723]: time="2025-06-21T04:44:32.655682881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 
04:44:32.656092 containerd[1723]: time="2025-06-21T04:44:32.655946191Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656092 containerd[1723]: time="2025-06-21T04:44:32.655957808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656092 containerd[1723]: time="2025-06-21T04:44:32.655967935Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656092 containerd[1723]: time="2025-06-21T04:44:32.655976486Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656092 containerd[1723]: time="2025-06-21T04:44:32.656036255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656714 containerd[1723]: time="2025-06-21T04:44:32.656189731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656714 containerd[1723]: time="2025-06-21T04:44:32.656211519Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 04:44:32.656714 containerd[1723]: time="2025-06-21T04:44:32.656220238Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 04:44:32.656714 containerd[1723]: time="2025-06-21T04:44:32.656289135Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 04:44:32.656714 
containerd[1723]: time="2025-06-21T04:44:32.656519001Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 04:44:32.656714 containerd[1723]: time="2025-06-21T04:44:32.656562078Z" level=info msg="metadata content store policy set" policy=shared Jun 21 04:44:32.666558 containerd[1723]: time="2025-06-21T04:44:32.666524994Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 04:44:32.666615 containerd[1723]: time="2025-06-21T04:44:32.666567954Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 04:44:32.666615 containerd[1723]: time="2025-06-21T04:44:32.666585761Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 04:44:32.666615 containerd[1723]: time="2025-06-21T04:44:32.666597988Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 04:44:32.666615 containerd[1723]: time="2025-06-21T04:44:32.666609024Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666618162Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666630490Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666641063Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666650988Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: 
time="2025-06-21T04:44:32.666660095Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666669166Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 04:44:32.666694 containerd[1723]: time="2025-06-21T04:44:32.666680801Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 04:44:32.666801 containerd[1723]: time="2025-06-21T04:44:32.666779863Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 04:44:32.666801 containerd[1723]: time="2025-06-21T04:44:32.666794963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 04:44:32.666833 containerd[1723]: time="2025-06-21T04:44:32.666808289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 04:44:32.666833 containerd[1723]: time="2025-06-21T04:44:32.666818514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 04:44:32.666868 containerd[1723]: time="2025-06-21T04:44:32.666833432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 04:44:32.666868 containerd[1723]: time="2025-06-21T04:44:32.666843716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 04:44:32.666868 containerd[1723]: time="2025-06-21T04:44:32.666854351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 04:44:32.666868 containerd[1723]: time="2025-06-21T04:44:32.666863724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 04:44:32.666934 containerd[1723]: time="2025-06-21T04:44:32.666875405Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 04:44:32.666934 containerd[1723]: time="2025-06-21T04:44:32.666884221Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 04:44:32.666934 containerd[1723]: time="2025-06-21T04:44:32.666893661Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 04:44:32.666982 containerd[1723]: time="2025-06-21T04:44:32.666948943Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 04:44:32.666982 containerd[1723]: time="2025-06-21T04:44:32.666960769Z" level=info msg="Start snapshots syncer" Jun 21 04:44:32.666982 containerd[1723]: time="2025-06-21T04:44:32.666979031Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 04:44:32.667189 containerd[1723]: time="2025-06-21T04:44:32.667158763Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 04:44:32.668366 containerd[1723]: time="2025-06-21T04:44:32.668337037Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 04:44:32.669872 containerd[1723]: time="2025-06-21T04:44:32.669838110Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 04:44:32.669967 containerd[1723]: time="2025-06-21T04:44:32.669951201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 04:44:32.669998 containerd[1723]: time="2025-06-21T04:44:32.669975584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 04:44:32.669998 containerd[1723]: time="2025-06-21T04:44:32.669986607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 04:44:32.670032 containerd[1723]: time="2025-06-21T04:44:32.669996548Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 04:44:32.670032 containerd[1723]: time="2025-06-21T04:44:32.670007898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 04:44:32.670032 containerd[1723]: time="2025-06-21T04:44:32.670018025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 04:44:32.670032 containerd[1723]: time="2025-06-21T04:44:32.670028386Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 04:44:32.670096 containerd[1723]: time="2025-06-21T04:44:32.670060468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 04:44:32.670096 containerd[1723]: time="2025-06-21T04:44:32.670071789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 04:44:32.670096 containerd[1723]: time="2025-06-21T04:44:32.670081866Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 04:44:32.670149 containerd[1723]: time="2025-06-21T04:44:32.670109275Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:44:32.670149 containerd[1723]: time="2025-06-21T04:44:32.670121849Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:44:32.670149 containerd[1723]: time="2025-06-21T04:44:32.670129560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:44:32.670149 containerd[1723]: time="2025-06-21T04:44:32.670138118Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:44:32.670149 containerd[1723]: time="2025-06-21T04:44:32.670145700Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670153920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670172413Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670185946Z" level=info msg="runtime interface created" Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670190479Z" level=info msg="created NRI interface" Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670198167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 04:44:32.670233 containerd[1723]: time="2025-06-21T04:44:32.670210854Z" level=info msg="Connect containerd service" Jun 21 04:44:32.670354 containerd[1723]: time="2025-06-21T04:44:32.670235639Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 04:44:32.672304 
containerd[1723]: time="2025-06-21T04:44:32.672276265Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 04:44:32.931207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:44:32.941527 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.409744395Z" level=info msg="Start subscribing containerd event" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.409797838Z" level=info msg="Start recovering state" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410021757Z" level=info msg="Start event monitor" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410035255Z" level=info msg="Start cni network conf syncer for default" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410042539Z" level=info msg="Start streaming server" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410054439Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410061711Z" level=info msg="runtime interface starting up..." Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410066969Z" level=info msg="starting plugins..." Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410077628Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410311340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 04:44:33.412186 containerd[1723]: time="2025-06-21T04:44:33.410343133Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 21 04:44:33.410526 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 04:44:33.413027 containerd[1723]: time="2025-06-21T04:44:33.413000059Z" level=info msg="containerd successfully booted in 0.767148s" Jun 21 04:44:33.414657 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 04:44:33.417076 systemd[1]: Startup finished in 3.005s (kernel) + 11.637s (initrd) + 8.739s (userspace) = 23.381s. Jun 21 04:44:33.485222 kubelet[1843]: E0621 04:44:33.485185 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:44:33.486724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:44:33.486841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:44:33.487088 systemd[1]: kubelet.service: Consumed 878ms CPU time, 264.8M memory peak. 
Jun 21 04:44:33.638410 waagent[1826]: 2025-06-21T04:44:33.638342Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 21 04:44:33.639963 waagent[1826]: 2025-06-21T04:44:33.639922Z INFO Daemon Daemon OS: flatcar 4372.0.0 Jun 21 04:44:33.641376 waagent[1826]: 2025-06-21T04:44:33.640374Z INFO Daemon Daemon Python: 3.11.12 Jun 21 04:44:33.642349 waagent[1826]: 2025-06-21T04:44:33.642313Z INFO Daemon Daemon Run daemon Jun 21 04:44:33.643148 waagent[1826]: 2025-06-21T04:44:33.643120Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.0' Jun 21 04:44:33.645040 waagent[1826]: 2025-06-21T04:44:33.644194Z INFO Daemon Daemon Using waagent for provisioning Jun 21 04:44:33.646469 waagent[1826]: 2025-06-21T04:44:33.646438Z INFO Daemon Daemon Activate resource disk Jun 21 04:44:33.647457 waagent[1826]: 2025-06-21T04:44:33.646924Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 21 04:44:33.650359 waagent[1826]: 2025-06-21T04:44:33.650321Z INFO Daemon Daemon Found device: None Jun 21 04:44:33.651343 waagent[1826]: 2025-06-21T04:44:33.651311Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 21 04:44:33.652449 waagent[1826]: 2025-06-21T04:44:33.651985Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 21 04:44:33.656821 waagent[1826]: 2025-06-21T04:44:33.656778Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 21 04:44:33.658026 waagent[1826]: 2025-06-21T04:44:33.657997Z INFO Daemon Daemon Running default provisioning handler Jun 21 04:44:33.664642 waagent[1826]: 2025-06-21T04:44:33.664451Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jun 21 04:44:33.668205 waagent[1826]: 2025-06-21T04:44:33.667441Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 21 04:44:33.668205 waagent[1826]: 2025-06-21T04:44:33.667572Z INFO Daemon Daemon cloud-init is enabled: False Jun 21 04:44:33.668205 waagent[1826]: 2025-06-21T04:44:33.667773Z INFO Daemon Daemon Copying ovf-env.xml Jun 21 04:44:33.700510 login[1828]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jun 21 04:44:33.702095 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 04:44:33.712396 systemd-logind[1702]: New session 2 of user core. Jun 21 04:44:33.712634 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 04:44:33.713993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 04:44:33.728266 waagent[1826]: 2025-06-21T04:44:33.726691Z INFO Daemon Daemon Successfully mounted dvd Jun 21 04:44:33.738731 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 04:44:33.740276 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 21 04:44:33.742106 waagent[1826]: 2025-06-21T04:44:33.742066Z INFO Daemon Daemon Detect protocol endpoint Jun 21 04:44:33.743149 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 04:44:33.744523 waagent[1826]: 2025-06-21T04:44:33.744477Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 21 04:44:33.746775 waagent[1826]: 2025-06-21T04:44:33.746730Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 21 04:44:33.748655 waagent[1826]: 2025-06-21T04:44:33.748606Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 21 04:44:33.750029 waagent[1826]: 2025-06-21T04:44:33.749998Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 21 04:44:33.751387 waagent[1826]: 2025-06-21T04:44:33.751355Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 21 04:44:33.754056 (systemd)[1874]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 04:44:33.756147 systemd-logind[1702]: New session c1 of user core. Jun 21 04:44:33.764348 waagent[1826]: 2025-06-21T04:44:33.764312Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 21 04:44:33.766615 waagent[1826]: 2025-06-21T04:44:33.766592Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 21 04:44:33.767095 waagent[1826]: 2025-06-21T04:44:33.766710Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 21 04:44:33.860275 waagent[1826]: 2025-06-21T04:44:33.859493Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 21 04:44:33.860275 waagent[1826]: 2025-06-21T04:44:33.859663Z INFO Daemon Daemon Forcing an update of the goal state. Jun 21 04:44:33.864503 waagent[1826]: 2025-06-21T04:44:33.864475Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 21 04:44:33.875837 waagent[1826]: 2025-06-21T04:44:33.875815Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 21 04:44:33.876418 waagent[1826]: 2025-06-21T04:44:33.876385Z INFO Daemon Jun 21 04:44:33.876632 waagent[1826]: 2025-06-21T04:44:33.876614Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 26461def-7978-4921-afd2-155d15d09706 eTag: 4013892268322081057 source: Fabric] Jun 21 04:44:33.877177 waagent[1826]: 2025-06-21T04:44:33.877156Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 21 04:44:33.877538 waagent[1826]: 2025-06-21T04:44:33.877521Z INFO Daemon Jun 21 04:44:33.877657 waagent[1826]: 2025-06-21T04:44:33.877643Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 21 04:44:33.886267 waagent[1826]: 2025-06-21T04:44:33.886227Z INFO Daemon Daemon Downloading artifacts profile blob Jun 21 04:44:33.922487 systemd[1874]: Queued start job for default target default.target. Jun 21 04:44:33.935991 systemd[1874]: Created slice app.slice - User Application Slice. Jun 21 04:44:33.936085 systemd[1874]: Reached target paths.target - Paths. Jun 21 04:44:33.936183 systemd[1874]: Reached target timers.target - Timers. Jun 21 04:44:33.937039 systemd[1874]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 04:44:33.944897 systemd[1874]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 04:44:33.944945 systemd[1874]: Reached target sockets.target - Sockets. Jun 21 04:44:33.944986 systemd[1874]: Reached target basic.target - Basic System. Jun 21 04:44:33.945057 systemd[1874]: Reached target default.target - Main User Target. Jun 21 04:44:33.945080 systemd[1874]: Startup finished in 182ms. Jun 21 04:44:33.945108 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 04:44:33.949389 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 04:44:33.973685 waagent[1826]: 2025-06-21T04:44:33.973623Z INFO Daemon Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True} Jun 21 04:44:33.974315 waagent[1826]: 2025-06-21T04:44:33.974093Z INFO Daemon Fetch goal state completed Jun 21 04:44:33.982024 waagent[1826]: 2025-06-21T04:44:33.981967Z INFO Daemon Daemon Starting provisioning Jun 21 04:44:33.982548 waagent[1826]: 2025-06-21T04:44:33.982116Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 21 04:44:33.982548 waagent[1826]: 2025-06-21T04:44:33.982307Z INFO Daemon Daemon Set hostname [ci-4372.0.0-a-1fcff97c08] Jun 21 04:44:33.998511 waagent[1826]: 2025-06-21T04:44:33.998468Z INFO Daemon Daemon Publish hostname [ci-4372.0.0-a-1fcff97c08] Jun 21 04:44:33.999451 waagent[1826]: 2025-06-21T04:44:33.998744Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 21 04:44:33.999451 waagent[1826]: 2025-06-21T04:44:33.998973Z INFO Daemon Daemon Primary interface is [eth0] Jun 21 04:44:34.007341 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:44:34.011734 waagent[1826]: 2025-06-21T04:44:34.007782Z INFO Daemon Daemon Create user account if not exists Jun 21 04:44:34.011734 waagent[1826]: 2025-06-21T04:44:34.007961Z INFO Daemon Daemon User core already exists, skip useradd Jun 21 04:44:34.011734 waagent[1826]: 2025-06-21T04:44:34.008096Z INFO Daemon Daemon Configure sudoer Jun 21 04:44:34.011116 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:44:34.011143 systemd-networkd[1365]: eth0: DHCP lease lost Jun 21 04:44:34.014926 waagent[1826]: 2025-06-21T04:44:34.014888Z INFO Daemon Daemon Configure sshd Jun 21 04:44:34.022125 waagent[1826]: 2025-06-21T04:44:34.022079Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 21 04:44:34.022485 waagent[1826]: 2025-06-21T04:44:34.022239Z INFO Daemon Daemon Deploy ssh public key. Jun 21 04:44:34.044302 systemd-networkd[1365]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:44:34.700937 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 04:44:34.706028 systemd-logind[1702]: New session 1 of user core. 
Jun 21 04:44:34.715382 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 04:44:35.124953 waagent[1826]: 2025-06-21T04:44:35.124910Z INFO Daemon Daemon Provisioning complete Jun 21 04:44:35.136736 waagent[1826]: 2025-06-21T04:44:35.136697Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 21 04:44:35.137157 waagent[1826]: 2025-06-21T04:44:35.136926Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 21 04:44:35.137157 waagent[1826]: 2025-06-21T04:44:35.137202Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 21 04:44:35.231301 waagent[1914]: 2025-06-21T04:44:35.231227Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 21 04:44:35.231550 waagent[1914]: 2025-06-21T04:44:35.231335Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.0 Jun 21 04:44:35.231550 waagent[1914]: 2025-06-21T04:44:35.231379Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 21 04:44:35.231550 waagent[1914]: 2025-06-21T04:44:35.231421Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 21 04:44:35.250685 waagent[1914]: 2025-06-21T04:44:35.250644Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 21 04:44:35.250807 waagent[1914]: 2025-06-21T04:44:35.250785Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 21 04:44:35.250843 waagent[1914]: 2025-06-21T04:44:35.250831Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 21 04:44:35.261482 waagent[1914]: 2025-06-21T04:44:35.261429Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 21 04:44:35.269607 waagent[1914]: 2025-06-21T04:44:35.269577Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 21 04:44:35.269919 waagent[1914]: 
2025-06-21T04:44:35.269892Z INFO ExtHandler Jun 21 04:44:35.269968 waagent[1914]: 2025-06-21T04:44:35.269939Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 69fb1ef7-a00d-4663-84ca-38e750939ac7 eTag: 4013892268322081057 source: Fabric] Jun 21 04:44:35.270150 waagent[1914]: 2025-06-21T04:44:35.270128Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 21 04:44:35.270463 waagent[1914]: 2025-06-21T04:44:35.270440Z INFO ExtHandler Jun 21 04:44:35.270503 waagent[1914]: 2025-06-21T04:44:35.270477Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 21 04:44:35.274719 waagent[1914]: 2025-06-21T04:44:35.274689Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 21 04:44:35.339607 waagent[1914]: 2025-06-21T04:44:35.339558Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True} Jun 21 04:44:35.339931 waagent[1914]: 2025-06-21T04:44:35.339902Z INFO ExtHandler Fetch goal state completed Jun 21 04:44:35.353836 waagent[1914]: 2025-06-21T04:44:35.353790Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 21 04:44:35.357940 waagent[1914]: 2025-06-21T04:44:35.357889Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1914 Jun 21 04:44:35.358030 waagent[1914]: 2025-06-21T04:44:35.358007Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 21 04:44:35.358263 waagent[1914]: 2025-06-21T04:44:35.358235Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 21 04:44:35.359196 waagent[1914]: 2025-06-21T04:44:35.359163Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 21 04:44:35.359494 waagent[1914]: 2025-06-21T04:44:35.359468Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 21 04:44:35.359595 waagent[1914]: 2025-06-21T04:44:35.359574Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 21 04:44:35.359955 waagent[1914]: 2025-06-21T04:44:35.359930Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 21 04:44:35.378119 waagent[1914]: 2025-06-21T04:44:35.378070Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 21 04:44:35.378212 waagent[1914]: 2025-06-21T04:44:35.378194Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 21 04:44:35.382998 waagent[1914]: 2025-06-21T04:44:35.382809Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 21 04:44:35.387933 systemd[1]: Reload requested from client PID 1929 ('systemctl') (unit waagent.service)... Jun 21 04:44:35.387944 systemd[1]: Reloading... Jun 21 04:44:35.460386 zram_generator::config[1963]: No configuration found. Jun 21 04:44:35.535112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:44:35.624566 systemd[1]: Reloading finished in 236 ms. 
Jun 21 04:44:35.634306 waagent[1914]: 2025-06-21T04:44:35.633795Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 21 04:44:35.634306 waagent[1914]: 2025-06-21T04:44:35.633905Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 21 04:44:35.754692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#101 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 21 04:44:35.921763 waagent[1914]: 2025-06-21T04:44:35.921651Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 21 04:44:35.921977 waagent[1914]: 2025-06-21T04:44:35.921950Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 21 04:44:35.922724 waagent[1914]: 2025-06-21T04:44:35.922685Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 21 04:44:35.923019 waagent[1914]: 2025-06-21T04:44:35.922992Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 21 04:44:35.923110 waagent[1914]: 2025-06-21T04:44:35.923025Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jun 21 04:44:35.923353 waagent[1914]: 2025-06-21T04:44:35.923305Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 21 04:44:35.923450 waagent[1914]: 2025-06-21T04:44:35.923419Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 21 04:44:35.923741 waagent[1914]: 2025-06-21T04:44:35.923712Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jun 21 04:44:35.923960 waagent[1914]: 2025-06-21T04:44:35.923926Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 21 04:44:35.924045 waagent[1914]: 2025-06-21T04:44:35.923992Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jun 21 04:44:35.924334 waagent[1914]: 2025-06-21T04:44:35.924304Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jun 21 04:44:35.924460 waagent[1914]: 2025-06-21T04:44:35.924438Z INFO EnvHandler ExtHandler Configure routes
Jun 21 04:44:35.924518 waagent[1914]: 2025-06-21T04:44:35.924498Z INFO EnvHandler ExtHandler Gateway:None
Jun 21 04:44:35.924565 waagent[1914]: 2025-06-21T04:44:35.924549Z INFO EnvHandler ExtHandler Routes:None
Jun 21 04:44:35.925262 waagent[1914]: 2025-06-21T04:44:35.925224Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jun 21 04:44:35.925670 waagent[1914]: 2025-06-21T04:44:35.925647Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jun 21 04:44:35.925670 waagent[1914]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jun 21 04:44:35.925670 waagent[1914]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jun 21 04:44:35.925670 waagent[1914]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jun 21 04:44:35.925670 waagent[1914]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:35.925670 waagent[1914]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:35.925670 waagent[1914]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 21 04:44:35.926656 waagent[1914]: 2025-06-21T04:44:35.926557Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jun 21 04:44:35.926785 waagent[1914]: 2025-06-21T04:44:35.926764Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jun 21 04:44:35.935105 waagent[1914]: 2025-06-21T04:44:35.935070Z INFO ExtHandler ExtHandler
Jun 21 04:44:35.935159 waagent[1914]: 2025-06-21T04:44:35.935124Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: afa9145f-1ffc-49f8-b008-e750b2a48378 correlation e4c9b05f-73ab-4b9d-93a0-56fcec521aec created: 2025-06-21T04:43:33.737373Z]
Jun 21 04:44:35.935405 waagent[1914]: 2025-06-21T04:44:35.935380Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jun 21 04:44:35.935734 waagent[1914]: 2025-06-21T04:44:35.935713Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jun 21 04:44:35.966413 waagent[1914]: 2025-06-21T04:44:35.966370Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jun 21 04:44:35.966413 waagent[1914]: Try `iptables -h' or 'iptables --help' for more information.)
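The routing table logged above is the raw format of /proc/net/route: each destination, gateway, and mask is printed as the hexadecimal of a 32-bit value in host byte order, which is little-endian on this x86_64 guest. A small stdlib-only sketch decodes those columns back into dotted quads:

```python
import socket
import struct

def decode_hex_ip(field: str) -> str:
    """Decode one /proc/net/route address field: the kernel prints each
    32-bit address in host byte order (little-endian on x86_64)."""
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

# Values taken from the routing-table rows logged above:
print(decode_hex_ip("0108C80A"))  # 10.200.8.1      (default gateway)
print(decode_hex_ip("10813FA8"))  # 168.63.129.16   (Azure wire server)
print(decode_hex_ip("FEA9FEA9"))  # 169.254.169.254 (instance metadata)
```

Decoded this way, the table shows exactly the routes waagent relies on: a default route via 10.200.8.1, plus host routes to the wire server and the metadata endpoint.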
Jun 21 04:44:35.966708 waagent[1914]: 2025-06-21T04:44:35.966682Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 71E758F4-B578-4515-8221-A0F8E6D88946;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jun 21 04:44:35.987716 waagent[1914]: 2025-06-21T04:44:35.987673Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jun 21 04:44:35.987716 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.987716 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.987716 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.987716 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.987716 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.987716 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.987716 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 21 04:44:35.987716 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 21 04:44:35.987716 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 21 04:44:35.990123 waagent[1914]: 2025-06-21T04:44:35.990081Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 21 04:44:35.990123 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.990123 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.990123 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.990123 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.990123 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 21 04:44:35.990123 waagent[1914]: pkts bytes target prot opt in out source destination
Jun 21 04:44:35.990123 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 21 04:44:35.990123 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 21 04:44:35.990123 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 21 04:44:36.085201 waagent[1914]: 2025-06-21T04:44:36.085135Z INFO MonitorHandler ExtHandler Network interfaces:
Jun 21 04:44:36.085201 waagent[1914]: Executing ['ip', '-a', '-o', 'link']:
Jun 21 04:44:36.085201 waagent[1914]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jun 21 04:44:36.085201 waagent[1914]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:a4:fe brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jun 21 04:44:36.085201 waagent[1914]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:a4:fe brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jun 21 04:44:36.085201 waagent[1914]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jun 21 04:44:36.085201 waagent[1914]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jun 21 04:44:36.085201 waagent[1914]: 2: eth0 inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jun 21 04:44:36.085201 waagent[1914]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jun 21 04:44:36.085201 waagent[1914]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jun 21 04:44:36.085201 waagent[1914]: 2: eth0 inet6 fe80::7eed:8dff:fe4a:a4fe/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 21 04:44:36.085201 waagent[1914]: 3: enP30832s1 inet6 fe80::7eed:8dff:fe4a:a4fe/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 21 04:44:43.574152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
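The interface dump above is `ip -o` output, which waagent collects because its one-record-per-line format is easy to scan mechanically. A minimal sketch of that kind of parsing, using the eth0 record from the log as sample input:

```python
import re

# One `ip -4 -o address` record, as logged by waagent above.
SAMPLE = "2: eth0    inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0"

def parse_ip_o_address(line: str) -> tuple[str, str]:
    """Extract (ifname, cidr) from one `ip -o address` record."""
    m = re.match(r"\d+:\s+(\S+)\s+inet6?\s+(\S+)", line)
    if not m:
        raise ValueError(f"unparsable record: {line!r}")
    return m.group(1), m.group(2)

print(parse_ip_o_address(SAMPLE))  # ('eth0', '10.200.8.45/24')
```

The second interface pair in the dump (eth0 and enP30832s1 sharing one MAC) is the usual Azure accelerated-networking layout: the synthetic NIC with the SR-IOV virtual function enslaved to it.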
Jun 21 04:44:43.575932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:44:44.204957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:44:44.210503 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:44:44.244826 kubelet[2065]: E0621 04:44:44.244794 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:44:44.247160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:44:44.247288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:44:44.247600 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.1M memory peak. Jun 21 04:44:53.803359 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 04:44:53.804431 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:39616.service - OpenSSH per-connection server daemon (10.200.16.10:39616). Jun 21 04:44:54.323932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 21 04:44:54.325224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:44:54.524872 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 39616 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:54.526022 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:54.530700 systemd-logind[1702]: New session 3 of user core. Jun 21 04:44:54.542401 systemd[1]: Started session-3.scope - Session 3 of User core. 
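The kubelet crash loop above is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, so until then every restart fails at config load. For orientation only, a minimal hand-written KubeletConfiguration of the kind that file contains might look like this (field values are illustrative assumptions, not taken from this host):

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only; on a kubeadm
# cluster this file is generated during `kubeadm init` / `kubeadm join`.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
cgroupDriver: systemd
```

Once a join writes the real file, the same restart job that keeps failing here succeeds, which is why systemd is left to retry rather than the unit being disabled.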
Jun 21 04:44:54.855226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:44:54.860473 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:44:54.902447 kubelet[2084]: E0621 04:44:54.902400 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:44:54.903876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:44:54.903989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:44:54.904235 systemd[1]: kubelet.service: Consumed 121ms CPU time, 108.7M memory peak. Jun 21 04:44:55.071111 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:39622.service - OpenSSH per-connection server daemon (10.200.16.10:39622). Jun 21 04:44:55.366930 chronyd[1732]: Selected source PHC0 Jun 21 04:44:55.697510 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 39622 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:55.698708 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:55.703093 systemd-logind[1702]: New session 4 of user core. Jun 21 04:44:55.712365 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 04:44:56.138684 sshd[2095]: Connection closed by 10.200.16.10 port 39622 Jun 21 04:44:56.139200 sshd-session[2093]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:56.141908 systemd[1]: sshd@1-10.200.8.45:22-10.200.16.10:39622.service: Deactivated successfully. Jun 21 04:44:56.143371 systemd[1]: session-4.scope: Deactivated successfully. 
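The chronyd entry above ("Selected source PHC0") means chrony picked a PTP hardware clock refclock as its time source. On Hyper-V/Azure guests this is typically the host-provided clock device; a chrony.conf fragment of the kind that produces a PHC source might look like this (the device path is an assumption, not shown in the log):

```
# chrony.conf fragment -- illustrative; on Hyper-V/Azure the host clock is
# usually exposed as /dev/ptp_hyperv.
refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0
```

Using the host's PHC avoids NTP network round trips entirely, which is why chronyd prefers it once it is configured.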
Jun 21 04:44:56.144621 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Jun 21 04:44:56.145538 systemd-logind[1702]: Removed session 4. Jun 21 04:44:56.267087 systemd[1]: Started sshd@2-10.200.8.45:22-10.200.16.10:39624.service - OpenSSH per-connection server daemon (10.200.16.10:39624). Jun 21 04:44:56.901741 sshd[2101]: Accepted publickey for core from 10.200.16.10 port 39624 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:56.903004 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:56.907317 systemd-logind[1702]: New session 5 of user core. Jun 21 04:44:56.915387 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 04:44:57.340970 sshd[2103]: Connection closed by 10.200.16.10 port 39624 Jun 21 04:44:57.341512 sshd-session[2101]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:57.345013 systemd[1]: sshd@2-10.200.8.45:22-10.200.16.10:39624.service: Deactivated successfully. Jun 21 04:44:57.346490 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 04:44:57.347120 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. Jun 21 04:44:57.348385 systemd-logind[1702]: Removed session 5. Jun 21 04:44:57.455408 systemd[1]: Started sshd@3-10.200.8.45:22-10.200.16.10:39638.service - OpenSSH per-connection server daemon (10.200.16.10:39638). Jun 21 04:44:58.089308 sshd[2109]: Accepted publickey for core from 10.200.16.10 port 39638 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:58.090555 sshd-session[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:58.094682 systemd-logind[1702]: New session 6 of user core. Jun 21 04:44:58.100365 systemd[1]: Started session-6.scope - Session 6 of User core. 
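Each "Accepted publickey" line above identifies the key by its OpenSSH SHA256 fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch of that computation (the key itself is not in the log, so the sample input below is arbitrary):

```python
import base64
import hashlib

def openssh_fingerprint(pubkey_b64: str) -> str:
    """SHA256 fingerprint as sshd logs it: base64(sha256(key blob)),
    with the trailing '=' padding stripped."""
    blob = base64.b64decode(pubkey_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Arbitrary sample blob -- a real call would pass the base64 field of a
# line from authorized_keys or `ssh-keygen -lf`.
print(openssh_fingerprint(base64.b64encode(b"example-key-blob").decode()))
```

This is the same value `ssh-keygen -lf authorized_keys` prints, so the logged fingerprint can be matched against the keys provisioned on the host.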
Jun 21 04:44:58.532965 sshd[2111]: Connection closed by 10.200.16.10 port 39638 Jun 21 04:44:58.533774 sshd-session[2109]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:58.536340 systemd[1]: sshd@3-10.200.8.45:22-10.200.16.10:39638.service: Deactivated successfully. Jun 21 04:44:58.538178 systemd-logind[1702]: Session 6 logged out. Waiting for processes to exit. Jun 21 04:44:58.538456 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 04:44:58.539855 systemd-logind[1702]: Removed session 6. Jun 21 04:44:58.647150 systemd[1]: Started sshd@4-10.200.8.45:22-10.200.16.10:40296.service - OpenSSH per-connection server daemon (10.200.16.10:40296). Jun 21 04:44:59.276436 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 40296 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:44:59.277637 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:44:59.281808 systemd-logind[1702]: New session 7 of user core. Jun 21 04:44:59.284395 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 04:44:59.693112 sudo[2120]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 04:44:59.693330 sudo[2120]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:44:59.718056 sudo[2120]: pam_unix(sudo:session): session closed for user root Jun 21 04:44:59.817564 sshd[2119]: Connection closed by 10.200.16.10 port 40296 Jun 21 04:44:59.818183 sshd-session[2117]: pam_unix(sshd:session): session closed for user core Jun 21 04:44:59.821232 systemd[1]: sshd@4-10.200.8.45:22-10.200.16.10:40296.service: Deactivated successfully. Jun 21 04:44:59.822706 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 04:44:59.824140 systemd-logind[1702]: Session 7 logged out. Waiting for processes to exit. Jun 21 04:44:59.824986 systemd-logind[1702]: Removed session 7. 
Jun 21 04:44:59.930167 systemd[1]: Started sshd@5-10.200.8.45:22-10.200.16.10:40302.service - OpenSSH per-connection server daemon (10.200.16.10:40302). Jun 21 04:45:00.558392 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 40302 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:00.559695 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:00.564164 systemd-logind[1702]: New session 8 of user core. Jun 21 04:45:00.571409 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 04:45:00.901010 sudo[2130]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 04:45:00.901231 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:00.908561 sudo[2130]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:00.912232 sudo[2129]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 04:45:00.912484 sudo[2129]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:00.919704 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:45:00.947610 augenrules[2152]: No rules Jun 21 04:45:00.948577 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:45:00.948790 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:45:00.949680 sudo[2129]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:01.051268 sshd[2128]: Connection closed by 10.200.16.10 port 40302 Jun 21 04:45:01.051756 sshd-session[2126]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:01.054832 systemd[1]: sshd@5-10.200.8.45:22-10.200.16.10:40302.service: Deactivated successfully. Jun 21 04:45:01.056131 systemd[1]: session-8.scope: Deactivated successfully. 
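The session above removes the stock rule files under /etc/audit/rules.d/ and restarts audit-rules, after which augenrules correctly reports "No rules". For context, augenrules concatenates one-directive-per-line files from that directory; a minimal illustrative file might look like this (the deleted Flatcar defaults are not shown in the log, so these rules are an assumption):

```
# /etc/audit/rules.d/10-example.rules -- illustrative sketch only.
# Flush any existing rules, then size the kernel audit backlog buffer.
-D
-b 8192
# Watch /etc/passwd for writes and attribute changes, keyed "identity".
-w /etc/passwd -p wa -k identity
```

With the directory emptied, restarting audit-rules simply loads an empty ruleset, which is what the "No rules" line records.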
Jun 21 04:45:01.056835 systemd-logind[1702]: Session 8 logged out. Waiting for processes to exit. Jun 21 04:45:01.057835 systemd-logind[1702]: Removed session 8. Jun 21 04:45:01.165367 systemd[1]: Started sshd@6-10.200.8.45:22-10.200.16.10:40318.service - OpenSSH per-connection server daemon (10.200.16.10:40318). Jun 21 04:45:01.793021 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 40318 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:01.794232 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:01.798536 systemd-logind[1702]: New session 9 of user core. Jun 21 04:45:01.807385 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 04:45:02.136180 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 04:45:02.136392 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:03.262347 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 04:45:03.271528 (dockerd)[2182]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 04:45:04.046434 dockerd[2182]: time="2025-06-21T04:45:04.046385702Z" level=info msg="Starting up" Jun 21 04:45:04.047211 dockerd[2182]: time="2025-06-21T04:45:04.047102575Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 04:45:04.087724 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport150034820-merged.mount: Deactivated successfully. Jun 21 04:45:04.179110 dockerd[2182]: time="2025-06-21T04:45:04.179077570Z" level=info msg="Loading containers: start." 
Jun 21 04:45:04.206274 kernel: Initializing XFRM netlink socket Jun 21 04:45:04.452206 systemd-networkd[1365]: docker0: Link UP Jun 21 04:45:04.463330 dockerd[2182]: time="2025-06-21T04:45:04.463301863Z" level=info msg="Loading containers: done." Jun 21 04:45:04.488839 dockerd[2182]: time="2025-06-21T04:45:04.488811211Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 04:45:04.488951 dockerd[2182]: time="2025-06-21T04:45:04.488873815Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 04:45:04.488985 dockerd[2182]: time="2025-06-21T04:45:04.488955724Z" level=info msg="Initializing buildkit" Jun 21 04:45:04.524940 dockerd[2182]: time="2025-06-21T04:45:04.524902117Z" level=info msg="Completed buildkit initialization" Jun 21 04:45:04.530084 dockerd[2182]: time="2025-06-21T04:45:04.530053752Z" level=info msg="Daemon has completed initialization" Jun 21 04:45:04.530152 dockerd[2182]: time="2025-06-21T04:45:04.530094891Z" level=info msg="API listen on /run/docker.sock" Jun 21 04:45:04.531480 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 04:45:05.074090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 21 04:45:05.075957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:05.636731 containerd[1723]: time="2025-06-21T04:45:05.636673263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 21 04:45:05.748131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
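Several units in this log ("Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, ...") are systemd noting that an `ExecStart=` line references `$VAR`-style variables that nothing has defined, so they expand to nothing. A drop-in can supply them; the flag and value below are illustrative assumptions, not taken from this host's units:

```
# /etc/systemd/system/kubelet.service.d/10-extra-args.conf -- sketch only.
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.200.8.45"
```

After `systemctl daemon-reload`, the variable expands into the ExecStart line and the warning disappears; leaving it unset is harmless and is the normal state on a freshly provisioned node.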
Jun 21 04:45:05.750861 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:05.792479 kubelet[2388]: E0621 04:45:05.792447 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:05.793806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:05.793923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:05.794331 systemd[1]: kubelet.service: Consumed 119ms CPU time, 108.5M memory peak. Jun 21 04:45:06.406624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110100386.mount: Deactivated successfully. Jun 21 04:45:07.533895 containerd[1723]: time="2025-06-21T04:45:07.533850066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.535907 containerd[1723]: time="2025-06-21T04:45:07.535874334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jun 21 04:45:07.538150 containerd[1723]: time="2025-06-21T04:45:07.538112826Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.541531 containerd[1723]: time="2025-06-21T04:45:07.541493983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:07.542453 containerd[1723]: time="2025-06-21T04:45:07.541994673Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.905267148s" Jun 21 04:45:07.542453 containerd[1723]: time="2025-06-21T04:45:07.542025610Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 21 04:45:07.542681 containerd[1723]: time="2025-06-21T04:45:07.542652516Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 21 04:45:08.830596 containerd[1723]: time="2025-06-21T04:45:08.830561172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.832765 containerd[1723]: time="2025-06-21T04:45:08.832716833Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jun 21 04:45:08.835071 containerd[1723]: time="2025-06-21T04:45:08.835031678Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.838475 containerd[1723]: time="2025-06-21T04:45:08.838433600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:08.839380 containerd[1723]: time="2025-06-21T04:45:08.838966450Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id 
\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.296268669s" Jun 21 04:45:08.839380 containerd[1723]: time="2025-06-21T04:45:08.838998222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 21 04:45:08.839584 containerd[1723]: time="2025-06-21T04:45:08.839559248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 21 04:45:09.992468 containerd[1723]: time="2025-06-21T04:45:09.992423065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:09.994829 containerd[1723]: time="2025-06-21T04:45:09.994795258Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jun 21 04:45:09.997400 containerd[1723]: time="2025-06-21T04:45:09.997366797Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:10.000570 containerd[1723]: time="2025-06-21T04:45:10.000533528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:10.001393 containerd[1723]: time="2025-06-21T04:45:10.001015663Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.161429922s" Jun 21 04:45:10.001393 containerd[1723]: time="2025-06-21T04:45:10.001044841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 21 04:45:10.001623 containerd[1723]: time="2025-06-21T04:45:10.001599037Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 21 04:45:10.920301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955497833.mount: Deactivated successfully. Jun 21 04:45:11.258671 containerd[1723]: time="2025-06-21T04:45:11.258576924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.260420 containerd[1723]: time="2025-06-21T04:45:11.260384599Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jun 21 04:45:11.262657 containerd[1723]: time="2025-06-21T04:45:11.262618401Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.267086 containerd[1723]: time="2025-06-21T04:45:11.267051095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:11.267484 containerd[1723]: time="2025-06-21T04:45:11.267345748Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.265692832s" Jun 21 04:45:11.267484 containerd[1723]: time="2025-06-21T04:45:11.267374365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 21 04:45:11.267807 containerd[1723]: time="2025-06-21T04:45:11.267783398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 04:45:11.829096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236130560.mount: Deactivated successfully. Jun 21 04:45:12.707029 containerd[1723]: time="2025-06-21T04:45:12.706984118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:12.709240 containerd[1723]: time="2025-06-21T04:45:12.709208786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 21 04:45:12.711562 containerd[1723]: time="2025-06-21T04:45:12.711533991Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:12.714895 containerd[1723]: time="2025-06-21T04:45:12.714853475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:12.715585 containerd[1723]: time="2025-06-21T04:45:12.715459946Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.44765129s" Jun 21 04:45:12.715585 containerd[1723]: time="2025-06-21T04:45:12.715489152Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 04:45:12.716082 containerd[1723]: time="2025-06-21T04:45:12.716062077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 04:45:13.266376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307144699.mount: Deactivated successfully. Jun 21 04:45:13.289204 containerd[1723]: time="2025-06-21T04:45:13.289169065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:13.291326 containerd[1723]: time="2025-06-21T04:45:13.291296859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 21 04:45:13.295072 containerd[1723]: time="2025-06-21T04:45:13.295038450Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:13.299191 containerd[1723]: time="2025-06-21T04:45:13.299145254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:45:13.299678 containerd[1723]: time="2025-06-21T04:45:13.299539359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 583.453597ms" Jun 21 04:45:13.299678 containerd[1723]: time="2025-06-21T04:45:13.299563867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 04:45:13.300111 containerd[1723]: time="2025-06-21T04:45:13.300097165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 21 04:45:13.931731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011233471.mount: Deactivated successfully. Jun 21 04:45:15.543116 containerd[1723]: time="2025-06-21T04:45:15.543070763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:15.545238 containerd[1723]: time="2025-06-21T04:45:15.545206330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 21 04:45:15.547609 containerd[1723]: time="2025-06-21T04:45:15.547572679Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:15.551446 containerd[1723]: time="2025-06-21T04:45:15.551404613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:45:15.552212 containerd[1723]: time="2025-06-21T04:45:15.552025060Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.251892794s" Jun 21 04:45:15.552212 containerd[1723]: time="2025-06-21T04:45:15.552051056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 21 04:45:15.824234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 21 04:45:15.826239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:16.562309 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 21 04:45:16.893488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:16.901443 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:16.937996 kubelet[2604]: E0621 04:45:16.937948 2604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:16.939834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:16.939973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:16.940627 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak. Jun 21 04:45:17.048156 update_engine[1704]: I20250621 04:45:17.048113 1704 update_attempter.cc:509] Updating boot flags... Jun 21 04:45:18.446246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:18.446389 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak. Jun 21 04:45:18.448463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 21 04:45:18.474921 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-9.scope)... Jun 21 04:45:18.475025 systemd[1]: Reloading... Jun 21 04:45:18.557297 zram_generator::config[2686]: No configuration found. Jun 21 04:45:18.669271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:45:18.755193 systemd[1]: Reloading finished in 279 ms. Jun 21 04:45:18.787717 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 04:45:18.787792 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 04:45:18.788027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:18.788079 systemd[1]: kubelet.service: Consumed 61ms CPU time, 70M memory peak. Jun 21 04:45:18.789406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:19.266203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:19.269583 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:45:19.303686 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:45:19.303686 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 04:45:19.303686 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 21 04:45:19.303906 kubelet[2755]: I0621 04:45:19.303760 2755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:45:19.528404 kubelet[2755]: I0621 04:45:19.528339 2755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 04:45:19.528404 kubelet[2755]: I0621 04:45:19.528357 2755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:45:19.528694 kubelet[2755]: I0621 04:45:19.528589 2755 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 04:45:19.555726 kubelet[2755]: E0621 04:45:19.555685 2755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:19.557982 kubelet[2755]: I0621 04:45:19.557930 2755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:45:19.563714 kubelet[2755]: I0621 04:45:19.563567 2755 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:45:19.567759 kubelet[2755]: I0621 04:45:19.567742 2755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:45:19.569122 kubelet[2755]: I0621 04:45:19.569089 2755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:45:19.569266 kubelet[2755]: I0621 04:45:19.569117 2755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-1fcff97c08","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:45:19.569377 kubelet[2755]: I0621 04:45:19.569275 2755 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 21 04:45:19.569377 kubelet[2755]: I0621 04:45:19.569285 2755 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 04:45:19.569377 kubelet[2755]: I0621 04:45:19.569377 2755 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:19.572734 kubelet[2755]: I0621 04:45:19.572707 2755 kubelet.go:446] "Attempting to sync node with API server" Jun 21 04:45:19.574280 kubelet[2755]: I0621 04:45:19.574187 2755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:45:19.574280 kubelet[2755]: I0621 04:45:19.574216 2755 kubelet.go:352] "Adding apiserver pod source" Jun 21 04:45:19.574280 kubelet[2755]: I0621 04:45:19.574226 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:45:19.577792 kubelet[2755]: W0621 04:45:19.577754 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-1fcff97c08&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jun 21 04:45:19.577862 kubelet[2755]: E0621 04:45:19.577804 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-1fcff97c08&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:19.578107 kubelet[2755]: W0621 04:45:19.578077 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jun 21 04:45:19.578139 kubelet[2755]: E0621 04:45:19.578123 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:19.578460 kubelet[2755]: I0621 04:45:19.578447 2755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:45:19.578811 kubelet[2755]: I0621 04:45:19.578801 2755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 04:45:19.579403 kubelet[2755]: W0621 04:45:19.579386 2755 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 04:45:19.581278 kubelet[2755]: I0621 04:45:19.581094 2755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 04:45:19.581278 kubelet[2755]: I0621 04:45:19.581121 2755 server.go:1287] "Started kubelet" Jun 21 04:45:19.584270 kubelet[2755]: I0621 04:45:19.584214 2755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:45:19.584977 kubelet[2755]: I0621 04:45:19.584954 2755 server.go:479] "Adding debug handlers to kubelet server" Jun 21 04:45:19.587820 kubelet[2755]: I0621 04:45:19.587040 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:45:19.587820 kubelet[2755]: I0621 04:45:19.587277 2755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:45:19.588946 kubelet[2755]: E0621 04:45:19.587711 2755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-a-1fcff97c08.184af54dc6a8eec6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-a-1fcff97c08,UID:ci-4372.0.0-a-1fcff97c08,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-a-1fcff97c08,},FirstTimestamp:2025-06-21 04:45:19.581105862 +0000 UTC m=+0.308318218,LastTimestamp:2025-06-21 04:45:19.581105862 +0000 UTC m=+0.308318218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-a-1fcff97c08,}" Jun 21 04:45:19.590215 kubelet[2755]: I0621 04:45:19.590203 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:45:19.592124 kubelet[2755]: I0621 04:45:19.592105 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:45:19.593726 kubelet[2755]: I0621 04:45:19.593711 2755 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 04:45:19.595167 kubelet[2755]: I0621 04:45:19.593810 2755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 04:45:19.595167 kubelet[2755]: E0621 04:45:19.593941 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:19.595284 kubelet[2755]: I0621 04:45:19.595200 2755 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:45:19.595602 kubelet[2755]: W0621 04:45:19.595569 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jun 21 04:45:19.595682 kubelet[2755]: E0621 04:45:19.595671 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:19.595786 kubelet[2755]: E0621 04:45:19.595762 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-1fcff97c08?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="200ms" Jun 21 04:45:19.596047 kubelet[2755]: I0621 04:45:19.596036 2755 factory.go:221] Registration of the systemd container factory successfully Jun 21 04:45:19.596151 kubelet[2755]: I0621 04:45:19.596140 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:45:19.597788 kubelet[2755]: E0621 04:45:19.597776 2755 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:45:19.597936 kubelet[2755]: I0621 04:45:19.597929 2755 factory.go:221] Registration of the containerd container factory successfully Jun 21 04:45:19.624074 kubelet[2755]: I0621 04:45:19.624065 2755 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 04:45:19.624162 kubelet[2755]: I0621 04:45:19.624150 2755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 04:45:19.624225 kubelet[2755]: I0621 04:45:19.624212 2755 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:19.627559 kubelet[2755]: I0621 04:45:19.627517 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 04:45:19.629241 kubelet[2755]: I0621 04:45:19.628608 2755 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 04:45:19.629241 kubelet[2755]: I0621 04:45:19.628624 2755 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 04:45:19.629241 kubelet[2755]: I0621 04:45:19.628638 2755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 04:45:19.629241 kubelet[2755]: I0621 04:45:19.628644 2755 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 04:45:19.629241 kubelet[2755]: E0621 04:45:19.628674 2755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:45:19.629763 kubelet[2755]: W0621 04:45:19.629712 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jun 21 04:45:19.629763 kubelet[2755]: E0621 04:45:19.629738 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:19.630729 kubelet[2755]: I0621 04:45:19.630692 2755 policy_none.go:49] "None policy: Start" Jun 21 04:45:19.630817 kubelet[2755]: I0621 04:45:19.630810 2755 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 04:45:19.630966 kubelet[2755]: I0621 04:45:19.630960 2755 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:45:19.638064 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 04:45:19.644943 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 21 04:45:19.647383 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 04:45:19.664707 kubelet[2755]: I0621 04:45:19.664693 2755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 04:45:19.664837 kubelet[2755]: I0621 04:45:19.664829 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:45:19.664865 kubelet[2755]: I0621 04:45:19.664840 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:45:19.665408 kubelet[2755]: I0621 04:45:19.665072 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:45:19.665565 kubelet[2755]: E0621 04:45:19.665553 2755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 04:45:19.665604 kubelet[2755]: E0621 04:45:19.665588 2755 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:19.735819 systemd[1]: Created slice kubepods-burstable-pod264980e96f66386ba88be5da60ca99ca.slice - libcontainer container kubepods-burstable-pod264980e96f66386ba88be5da60ca99ca.slice. Jun 21 04:45:19.746309 kubelet[2755]: E0621 04:45:19.746151 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.748686 systemd[1]: Created slice kubepods-burstable-podcbce5050c557742376db8baac03ac895.slice - libcontainer container kubepods-burstable-podcbce5050c557742376db8baac03ac895.slice. 
Jun 21 04:45:19.750590 kubelet[2755]: E0621 04:45:19.750573 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.752178 systemd[1]: Created slice kubepods-burstable-pod2b8ef18b61571f8e3caad29de508036d.slice - libcontainer container kubepods-burstable-pod2b8ef18b61571f8e3caad29de508036d.slice. Jun 21 04:45:19.753690 kubelet[2755]: E0621 04:45:19.753658 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.766770 kubelet[2755]: I0621 04:45:19.766745 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.767034 kubelet[2755]: E0621 04:45:19.767005 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.796359 kubelet[2755]: E0621 04:45:19.796274 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-1fcff97c08?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="400ms" Jun 21 04:45:19.896760 kubelet[2755]: I0621 04:45:19.896727 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b8ef18b61571f8e3caad29de508036d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-1fcff97c08\" (UID: \"2b8ef18b61571f8e3caad29de508036d\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896853 kubelet[2755]: I0621 04:45:19.896782 2755 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896853 kubelet[2755]: I0621 04:45:19.896841 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896926 kubelet[2755]: I0621 04:45:19.896873 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896926 kubelet[2755]: I0621 04:45:19.896916 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896995 kubelet[2755]: I0621 04:45:19.896939 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " 
pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.896995 kubelet[2755]: I0621 04:45:19.896968 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.897061 kubelet[2755]: I0621 04:45:19.897019 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.897098 kubelet[2755]: I0621 04:45:19.897046 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.969211 kubelet[2755]: I0621 04:45:19.969187 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:19.969573 kubelet[2755]: E0621 04:45:19.969542 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:20.048175 containerd[1723]: time="2025-06-21T04:45:20.048065612Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-1fcff97c08,Uid:264980e96f66386ba88be5da60ca99ca,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:20.051712 containerd[1723]: time="2025-06-21T04:45:20.051672921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-1fcff97c08,Uid:cbce5050c557742376db8baac03ac895,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:20.054444 containerd[1723]: time="2025-06-21T04:45:20.054409118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-1fcff97c08,Uid:2b8ef18b61571f8e3caad29de508036d,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:20.134667 containerd[1723]: time="2025-06-21T04:45:20.134633345Z" level=info msg="connecting to shim 497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5" address="unix:///run/containerd/s/7aee8e9de7e69c6a7c4de7906b365010ab2c65ab6bc9175c0a5ee74efb88c956" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:20.162318 containerd[1723]: time="2025-06-21T04:45:20.160223922Z" level=info msg="connecting to shim cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e" address="unix:///run/containerd/s/d52f922abafb6cdd2ce525a5cd2750e64a65b15ee2641fda9ff8d53de284fbaa" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:20.162420 systemd[1]: Started cri-containerd-497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5.scope - libcontainer container 497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5. 
Jun 21 04:45:20.164101 containerd[1723]: time="2025-06-21T04:45:20.163884692Z" level=info msg="connecting to shim 3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2" address="unix:///run/containerd/s/4373c11b2bf9b87a8ac6fc939d99de7798ec69bdf44accc82716e2e49f9779dc" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:20.194386 systemd[1]: Started cri-containerd-3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2.scope - libcontainer container 3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2. Jun 21 04:45:20.196730 kubelet[2755]: E0621 04:45:20.196707 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-1fcff97c08?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="800ms" Jun 21 04:45:20.198801 systemd[1]: Started cri-containerd-cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e.scope - libcontainer container cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e. 
Jun 21 04:45:20.244360 containerd[1723]: time="2025-06-21T04:45:20.244318811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-1fcff97c08,Uid:264980e96f66386ba88be5da60ca99ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5\"" Jun 21 04:45:20.247292 containerd[1723]: time="2025-06-21T04:45:20.247272724Z" level=info msg="CreateContainer within sandbox \"497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 04:45:20.255927 containerd[1723]: time="2025-06-21T04:45:20.255904088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-1fcff97c08,Uid:cbce5050c557742376db8baac03ac895,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e\"" Jun 21 04:45:20.260580 containerd[1723]: time="2025-06-21T04:45:20.260551154Z" level=info msg="CreateContainer within sandbox \"cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 04:45:20.269977 containerd[1723]: time="2025-06-21T04:45:20.269959366Z" level=info msg="Container 56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:20.276299 containerd[1723]: time="2025-06-21T04:45:20.276276869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-1fcff97c08,Uid:2b8ef18b61571f8e3caad29de508036d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2\"" Jun 21 04:45:20.278950 containerd[1723]: time="2025-06-21T04:45:20.278927320Z" level=info msg="CreateContainer within sandbox \"3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 04:45:20.292093 containerd[1723]: time="2025-06-21T04:45:20.292071340Z" level=info msg="CreateContainer within sandbox \"497413df0b2a20199cdb77bebb2f3d6ddd6a740546833565b2d2454fdca180b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25\"" Jun 21 04:45:20.292528 containerd[1723]: time="2025-06-21T04:45:20.292510974Z" level=info msg="StartContainer for \"56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25\"" Jun 21 04:45:20.293282 containerd[1723]: time="2025-06-21T04:45:20.293230479Z" level=info msg="connecting to shim 56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25" address="unix:///run/containerd/s/7aee8e9de7e69c6a7c4de7906b365010ab2c65ab6bc9175c0a5ee74efb88c956" protocol=ttrpc version=3 Jun 21 04:45:20.300464 containerd[1723]: time="2025-06-21T04:45:20.299732793Z" level=info msg="Container 65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:20.310374 systemd[1]: Started cri-containerd-56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25.scope - libcontainer container 56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25. 
Jun 21 04:45:20.322576 containerd[1723]: time="2025-06-21T04:45:20.322549797Z" level=info msg="CreateContainer within sandbox \"cb99f886ef3f01a1f4818e1cbd1732cff01f28ec2a93059f891b33a6ab078b2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426\"" Jun 21 04:45:20.322918 containerd[1723]: time="2025-06-21T04:45:20.322896939Z" level=info msg="StartContainer for \"65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426\"" Jun 21 04:45:20.323861 containerd[1723]: time="2025-06-21T04:45:20.323829903Z" level=info msg="connecting to shim 65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426" address="unix:///run/containerd/s/d52f922abafb6cdd2ce525a5cd2750e64a65b15ee2641fda9ff8d53de284fbaa" protocol=ttrpc version=3 Jun 21 04:45:20.327744 containerd[1723]: time="2025-06-21T04:45:20.327722792Z" level=info msg="Container 9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:20.342521 systemd[1]: Started cri-containerd-65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426.scope - libcontainer container 65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426. 
Jun 21 04:45:20.347727 containerd[1723]: time="2025-06-21T04:45:20.347701472Z" level=info msg="CreateContainer within sandbox \"3009550ea42994d1914f53be0ad05224db06c6ab70045a21f31386996dd552c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b\"" Jun 21 04:45:20.348375 containerd[1723]: time="2025-06-21T04:45:20.348347838Z" level=info msg="StartContainer for \"9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b\"" Jun 21 04:45:20.349693 containerd[1723]: time="2025-06-21T04:45:20.349287614Z" level=info msg="connecting to shim 9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b" address="unix:///run/containerd/s/4373c11b2bf9b87a8ac6fc939d99de7798ec69bdf44accc82716e2e49f9779dc" protocol=ttrpc version=3 Jun 21 04:45:20.362613 containerd[1723]: time="2025-06-21T04:45:20.362594278Z" level=info msg="StartContainer for \"56ce10cd198befb08016379e3c6a44dca062dd0eb163735d8e25efc952ca9d25\" returns successfully" Jun 21 04:45:20.367382 systemd[1]: Started cri-containerd-9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b.scope - libcontainer container 9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b. 
Jun 21 04:45:20.371685 kubelet[2755]: I0621 04:45:20.371652 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:20.372713 kubelet[2755]: E0621 04:45:20.372691 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:20.405753 kubelet[2755]: W0621 04:45:20.405680 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-1fcff97c08&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused Jun 21 04:45:20.405753 kubelet[2755]: E0621 04:45:20.405734 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-1fcff97c08&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 04:45:20.424431 containerd[1723]: time="2025-06-21T04:45:20.424407687Z" level=info msg="StartContainer for \"65315594fc6e3ce0c70fcff2a6d2a749ce756b8a04888969280b647590369426\" returns successfully" Jun 21 04:45:20.451669 containerd[1723]: time="2025-06-21T04:45:20.451637795Z" level=info msg="StartContainer for \"9f9e6d235da15bd6009c22b0e8a8519dc5273469bb253214d303ee7771534b9b\" returns successfully" Jun 21 04:45:20.634604 kubelet[2755]: E0621 04:45:20.634541 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:20.634777 kubelet[2755]: E0621 04:45:20.634717 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:20.640262 kubelet[2755]: E0621 04:45:20.638460 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.174388 kubelet[2755]: I0621 04:45:21.174368 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.640265 kubelet[2755]: E0621 04:45:21.640225 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.640582 kubelet[2755]: E0621 04:45:21.640529 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.640797 kubelet[2755]: E0621 04:45:21.640782 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.814995 kubelet[2755]: E0621 04:45:21.814958 2755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.924426 kubelet[2755]: I0621 04:45:21.923973 2755 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:21.924426 kubelet[2755]: E0621 04:45:21.924357 2755 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.0-a-1fcff97c08\": node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:21.934992 kubelet[2755]: E0621 04:45:21.934966 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.035625 kubelet[2755]: E0621 04:45:22.035602 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.136537 kubelet[2755]: E0621 04:45:22.136506 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.237549 kubelet[2755]: E0621 04:45:22.237486 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.338012 kubelet[2755]: E0621 04:45:22.337982 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.438646 kubelet[2755]: E0621 04:45:22.438623 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.539231 kubelet[2755]: E0621 04:45:22.539143 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.639321 kubelet[2755]: E0621 04:45:22.639280 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.641978 kubelet[2755]: E0621 04:45:22.641961 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:22.739797 kubelet[2755]: E0621 04:45:22.739775 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.840775 kubelet[2755]: E0621 04:45:22.840738 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:22.941300 
kubelet[2755]: E0621 04:45:22.941243 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:23.041953 kubelet[2755]: E0621 04:45:23.041905 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:23.143394 kubelet[2755]: E0621 04:45:23.143021 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:23.243700 kubelet[2755]: E0621 04:45:23.243677 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:23.395296 kubelet[2755]: I0621 04:45:23.395199 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:23.404005 kubelet[2755]: W0621 04:45:23.403964 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:23.404318 kubelet[2755]: I0621 04:45:23.404276 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:23.409648 kubelet[2755]: W0621 04:45:23.409583 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:23.409814 kubelet[2755]: I0621 04:45:23.409689 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:23.415088 kubelet[2755]: W0621 04:45:23.415067 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:23.580556 kubelet[2755]: I0621 
04:45:23.580541 2755 apiserver.go:52] "Watching apiserver" Jun 21 04:45:23.595562 kubelet[2755]: I0621 04:45:23.595541 2755 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 04:45:23.713626 systemd[1]: Reload requested from client PID 3020 ('systemctl') (unit session-9.scope)... Jun 21 04:45:23.713640 systemd[1]: Reloading... Jun 21 04:45:23.785304 zram_generator::config[3065]: No configuration found. Jun 21 04:45:23.868452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:45:23.964233 systemd[1]: Reloading finished in 250 ms. Jun 21 04:45:23.986198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:23.996976 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 04:45:23.997188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:23.997234 systemd[1]: kubelet.service: Consumed 567ms CPU time, 131.5M memory peak. Jun 21 04:45:23.998474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:24.498398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:24.504611 (kubelet)[3133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:45:24.543006 kubelet[3133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:45:24.543207 kubelet[3133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 21 04:45:24.543207 kubelet[3133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:45:24.543207 kubelet[3133]: I0621 04:45:24.543099 3133 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:45:24.548581 kubelet[3133]: I0621 04:45:24.548561 3133 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 04:45:24.548581 kubelet[3133]: I0621 04:45:24.548577 3133 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:45:24.548782 kubelet[3133]: I0621 04:45:24.548769 3133 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 04:45:24.549609 kubelet[3133]: I0621 04:45:24.549590 3133 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 04:45:24.553138 kubelet[3133]: I0621 04:45:24.552443 3133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:45:24.555760 kubelet[3133]: I0621 04:45:24.555742 3133 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:45:24.559193 kubelet[3133]: I0621 04:45:24.559174 3133 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:45:24.559376 kubelet[3133]: I0621 04:45:24.559357 3133 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:45:24.559619 kubelet[3133]: I0621 04:45:24.559380 3133 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-1fcff97c08","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:45:24.559712 kubelet[3133]: I0621 04:45:24.559631 3133 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 21 04:45:24.559712 kubelet[3133]: I0621 04:45:24.559639 3133 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 04:45:24.559712 kubelet[3133]: I0621 04:45:24.559681 3133 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:24.559795 kubelet[3133]: I0621 04:45:24.559787 3133 kubelet.go:446] "Attempting to sync node with API server" Jun 21 04:45:24.559815 kubelet[3133]: I0621 04:45:24.559805 3133 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:45:24.559832 kubelet[3133]: I0621 04:45:24.559825 3133 kubelet.go:352] "Adding apiserver pod source" Jun 21 04:45:24.559851 kubelet[3133]: I0621 04:45:24.559834 3133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:45:24.562480 kubelet[3133]: I0621 04:45:24.562463 3133 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:45:24.562810 kubelet[3133]: I0621 04:45:24.562798 3133 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 04:45:24.563470 kubelet[3133]: I0621 04:45:24.563457 3133 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 04:45:24.563680 kubelet[3133]: I0621 04:45:24.563486 3133 server.go:1287] "Started kubelet" Jun 21 04:45:24.566974 kubelet[3133]: I0621 04:45:24.566910 3133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:45:24.572295 kubelet[3133]: I0621 04:45:24.572149 3133 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:45:24.573854 kubelet[3133]: I0621 04:45:24.573202 3133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:45:24.574146 kubelet[3133]: E0621 04:45:24.573997 3133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-1fcff97c08\" not found" Jun 21 04:45:24.574146 kubelet[3133]: 
I0621 04:45:24.573877 3133 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 04:45:24.575477 kubelet[3133]: I0621 04:45:24.575462 3133 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:45:24.575574 kubelet[3133]: I0621 04:45:24.573845 3133 server.go:479] "Adding debug handlers to kubelet server" Jun 21 04:45:24.577242 kubelet[3133]: I0621 04:45:24.573885 3133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 04:45:24.577345 kubelet[3133]: I0621 04:45:24.577335 3133 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:45:24.577509 kubelet[3133]: I0621 04:45:24.577500 3133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:45:24.580274 kubelet[3133]: I0621 04:45:24.579823 3133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 04:45:24.581625 kubelet[3133]: I0621 04:45:24.581347 3133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 04:45:24.581625 kubelet[3133]: I0621 04:45:24.581376 3133 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 04:45:24.581625 kubelet[3133]: I0621 04:45:24.581406 3133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 04:45:24.581625 kubelet[3133]: I0621 04:45:24.581413 3133 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 04:45:24.581625 kubelet[3133]: E0621 04:45:24.581448 3133 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:45:24.583522 kubelet[3133]: I0621 04:45:24.583500 3133 factory.go:221] Registration of the systemd container factory successfully Jun 21 04:45:24.583587 kubelet[3133]: I0621 04:45:24.583568 3133 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:45:24.591277 kubelet[3133]: E0621 04:45:24.589757 3133 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:45:24.591277 kubelet[3133]: I0621 04:45:24.589905 3133 factory.go:221] Registration of the containerd container factory successfully Jun 21 04:45:24.627935 kubelet[3133]: I0621 04:45:24.627921 3133 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 04:45:24.627935 kubelet[3133]: I0621 04:45:24.627934 3133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 04:45:24.628018 kubelet[3133]: I0621 04:45:24.627948 3133 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:45:24.628090 kubelet[3133]: I0621 04:45:24.628079 3133 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 04:45:24.628120 kubelet[3133]: I0621 04:45:24.628091 3133 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 04:45:24.628120 kubelet[3133]: I0621 04:45:24.628106 3133 policy_none.go:49] "None policy: Start" Jun 21 04:45:24.628120 kubelet[3133]: I0621 04:45:24.628116 3133 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 04:45:24.628184 kubelet[3133]: I0621 04:45:24.628125 
3133 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:45:24.628263 kubelet[3133]: I0621 04:45:24.628236 3133 state_mem.go:75] "Updated machine memory state" Jun 21 04:45:24.631187 kubelet[3133]: I0621 04:45:24.631172 3133 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 04:45:24.631688 kubelet[3133]: I0621 04:45:24.631300 3133 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:45:24.631688 kubelet[3133]: I0621 04:45:24.631310 3133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:45:24.631688 kubelet[3133]: I0621 04:45:24.631624 3133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:45:24.633321 kubelet[3133]: E0621 04:45:24.633306 3133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 04:45:24.682427 kubelet[3133]: I0621 04:45:24.682398 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.682573 kubelet[3133]: I0621 04:45:24.682561 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.682657 kubelet[3133]: I0621 04:45:24.682494 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.691653 kubelet[3133]: W0621 04:45:24.691632 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:24.691783 kubelet[3133]: E0621 04:45:24.691773 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" already exists" 
pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.691885 kubelet[3133]: W0621 04:45:24.691818 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:24.691913 kubelet[3133]: E0621 04:45:24.691900 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.691936 kubelet[3133]: W0621 04:45:24.691684 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 04:45:24.691936 kubelet[3133]: E0621 04:45:24.691933 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-a-1fcff97c08\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.724529 sudo[3165]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 04:45:24.724740 sudo[3165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 04:45:24.734028 kubelet[3133]: I0621 04:45:24.734011 3133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.743591 kubelet[3133]: I0621 04:45:24.743501 3133 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.743591 kubelet[3133]: I0621 04:45:24.743549 3133 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778520 kubelet[3133]: I0621 04:45:24.778326 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-ca-certs\") pod 
\"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778520 kubelet[3133]: I0621 04:45:24.778349 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778520 kubelet[3133]: I0621 04:45:24.778363 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778520 kubelet[3133]: I0621 04:45:24.778378 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778520 kubelet[3133]: I0621 04:45:24.778396 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b8ef18b61571f8e3caad29de508036d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-1fcff97c08\" (UID: \"2b8ef18b61571f8e3caad29de508036d\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778667 kubelet[3133]: I0621 04:45:24.778408 3133 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778667 kubelet[3133]: I0621 04:45:24.778423 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778667 kubelet[3133]: I0621 04:45:24.778439 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbce5050c557742376db8baac03ac895-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" (UID: \"cbce5050c557742376db8baac03ac895\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:24.778667 kubelet[3133]: I0621 04:45:24.778454 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/264980e96f66386ba88be5da60ca99ca-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" (UID: \"264980e96f66386ba88be5da60ca99ca\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" Jun 21 04:45:25.193563 sudo[3165]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:25.562569 kubelet[3133]: I0621 04:45:25.562494 3133 apiserver.go:52] "Watching apiserver" Jun 21 04:45:25.577327 kubelet[3133]: I0621 04:45:25.577294 3133 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 04:45:25.617440 kubelet[3133]: 
I0621 04:45:25.617423 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.617542 kubelet[3133]: I0621 04:45:25.617530 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.617800 kubelet[3133]: I0621 04:45:25.617734 3133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.626765 kubelet[3133]: W0621 04:45:25.626745 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:25.626999 kubelet[3133]: E0621 04:45:25.626880 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-a-1fcff97c08\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.627976 kubelet[3133]: W0621 04:45:25.627946 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:25.628126 kubelet[3133]: E0621 04:45:25.628075 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-a-1fcff97c08\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.628268 kubelet[3133]: W0621 04:45:25.628220 3133 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 21 04:45:25.628896 kubelet[3133]: E0621 04:45:25.628878 3133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-1fcff97c08\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08"
Jun 21 04:45:25.651282 kubelet[3133]: I0621 04:45:25.649430 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-1fcff97c08" podStartSLOduration=2.6494183529999997 podStartE2EDuration="2.649418353s" podCreationTimestamp="2025-06-21 04:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:25.648232092 +0000 UTC m=+1.140217149" watchObservedRunningTime="2025-06-21 04:45:25.649418353 +0000 UTC m=+1.141403411"
Jun 21 04:45:25.687120 kubelet[3133]: I0621 04:45:25.687077 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-a-1fcff97c08" podStartSLOduration=2.687064135 podStartE2EDuration="2.687064135s" podCreationTimestamp="2025-06-21 04:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:25.663462495 +0000 UTC m=+1.155447567" watchObservedRunningTime="2025-06-21 04:45:25.687064135 +0000 UTC m=+1.179049194"
Jun 21 04:45:25.701556 kubelet[3133]: I0621 04:45:25.701371 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.0-a-1fcff97c08" podStartSLOduration=2.701360073 podStartE2EDuration="2.701360073s" podCreationTimestamp="2025-06-21 04:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:25.688165941 +0000 UTC m=+1.180151002" watchObservedRunningTime="2025-06-21 04:45:25.701360073 +0000 UTC m=+1.193345126"
Jun 21 04:45:26.333107 sudo[2164]: pam_unix(sudo:session): session closed for user root
Jun 21 04:45:26.432442 sshd[2163]: Connection closed by 10.200.16.10 port 40318
Jun 21 04:45:26.432903 sshd-session[2161]: pam_unix(sshd:session): session closed for user core
Jun 21 04:45:26.435567 systemd[1]: sshd@6-10.200.8.45:22-10.200.16.10:40318.service: Deactivated successfully.
Jun 21 04:45:26.437956 systemd[1]: session-9.scope: Deactivated successfully.
Jun 21 04:45:26.438130 systemd[1]: session-9.scope: Consumed 3.622s CPU time, 271.2M memory peak.
Jun 21 04:45:26.439947 systemd-logind[1702]: Session 9 logged out. Waiting for processes to exit.
Jun 21 04:45:26.441187 systemd-logind[1702]: Removed session 9.
Jun 21 04:45:30.272744 kubelet[3133]: I0621 04:45:30.272720 3133 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 21 04:45:30.273107 containerd[1723]: time="2025-06-21T04:45:30.273067693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 21 04:45:30.273305 kubelet[3133]: I0621 04:45:30.273224 3133 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 21 04:45:31.168957 systemd[1]: Created slice kubepods-besteffort-pod91632d95_cabc_4038_bb67_cb81df901795.slice - libcontainer container kubepods-besteffort-pod91632d95_cabc_4038_bb67_cb81df901795.slice.
Jun 21 04:45:31.187076 systemd[1]: Created slice kubepods-burstable-pod6963d713_8ef4_402e_87e8_357650a64194.slice - libcontainer container kubepods-burstable-pod6963d713_8ef4_402e_87e8_357650a64194.slice.
Jun 21 04:45:31.219007 kubelet[3133]: I0621 04:45:31.218977 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-cgroup\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219014 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6963d713-8ef4-402e-87e8-357650a64194-cilium-config-path\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219031 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-kernel\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219048 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cni-path\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219065 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-etc-cni-netd\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219080 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-net\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219104 kubelet[3133]: I0621 04:45:31.219094 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-hostproc\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219110 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhcr\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-kube-api-access-9zhcr\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219132 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91632d95-cabc-4038-bb67-cb81df901795-lib-modules\") pod \"kube-proxy-wfkt2\" (UID: \"91632d95-cabc-4038-bb67-cb81df901795\") " pod="kube-system/kube-proxy-wfkt2"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219149 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91632d95-cabc-4038-bb67-cb81df901795-kube-proxy\") pod \"kube-proxy-wfkt2\" (UID: \"91632d95-cabc-4038-bb67-cb81df901795\") " pod="kube-system/kube-proxy-wfkt2"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219166 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-bpf-maps\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219182 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-lib-modules\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219263 kubelet[3133]: I0621 04:45:31.219201 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-hubble-tls\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219395 kubelet[3133]: I0621 04:45:31.219224 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91632d95-cabc-4038-bb67-cb81df901795-xtables-lock\") pod \"kube-proxy-wfkt2\" (UID: \"91632d95-cabc-4038-bb67-cb81df901795\") " pod="kube-system/kube-proxy-wfkt2"
Jun 21 04:45:31.219395 kubelet[3133]: I0621 04:45:31.219241 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmvh8\" (UniqueName: \"kubernetes.io/projected/91632d95-cabc-4038-bb67-cb81df901795-kube-api-access-jmvh8\") pod \"kube-proxy-wfkt2\" (UID: \"91632d95-cabc-4038-bb67-cb81df901795\") " pod="kube-system/kube-proxy-wfkt2"
Jun 21 04:45:31.219395 kubelet[3133]: I0621 04:45:31.219279 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-run\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219395 kubelet[3133]: I0621 04:45:31.219299 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-xtables-lock\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.219395 kubelet[3133]: I0621 04:45:31.219320 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6963d713-8ef4-402e-87e8-357650a64194-clustermesh-secrets\") pod \"cilium-rmmlm\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") " pod="kube-system/cilium-rmmlm"
Jun 21 04:45:31.385702 systemd[1]: Created slice kubepods-besteffort-podc7b84089_9d19_476d_998c_297d2d0892dd.slice - libcontainer container kubepods-besteffort-podc7b84089_9d19_476d_998c_297d2d0892dd.slice.
Jun 21 04:45:31.428317 kubelet[3133]: I0621 04:45:31.427895 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7b84089-9d19-476d-998c-297d2d0892dd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2wzph\" (UID: \"c7b84089-9d19-476d-998c-297d2d0892dd\") " pod="kube-system/cilium-operator-6c4d7847fc-2wzph"
Jun 21 04:45:31.428317 kubelet[3133]: I0621 04:45:31.427938 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6cp4\" (UniqueName: \"kubernetes.io/projected/c7b84089-9d19-476d-998c-297d2d0892dd-kube-api-access-m6cp4\") pod \"cilium-operator-6c4d7847fc-2wzph\" (UID: \"c7b84089-9d19-476d-998c-297d2d0892dd\") " pod="kube-system/cilium-operator-6c4d7847fc-2wzph"
Jun 21 04:45:31.476749 containerd[1723]: time="2025-06-21T04:45:31.476712677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfkt2,Uid:91632d95-cabc-4038-bb67-cb81df901795,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:31.491422 containerd[1723]: time="2025-06-21T04:45:31.491390504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmmlm,Uid:6963d713-8ef4-402e-87e8-357650a64194,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:31.535948 containerd[1723]: time="2025-06-21T04:45:31.535795893Z" level=info msg="connecting to shim d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a" address="unix:///run/containerd/s/9d57085f9ee77ab22af67ec64547e001d01f0496bb5a2ba7f34ef96d62b80d3f" namespace=k8s.io protocol=ttrpc version=3
Jun 21 04:45:31.553665 containerd[1723]: time="2025-06-21T04:45:31.553635987Z" level=info msg="connecting to shim f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" namespace=k8s.io protocol=ttrpc version=3
Jun 21 04:45:31.564405 systemd[1]: Started cri-containerd-d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a.scope - libcontainer container d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a.
Jun 21 04:45:31.572175 systemd[1]: Started cri-containerd-f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875.scope - libcontainer container f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875.
Jun 21 04:45:31.590799 containerd[1723]: time="2025-06-21T04:45:31.590559478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfkt2,Uid:91632d95-cabc-4038-bb67-cb81df901795,Namespace:kube-system,Attempt:0,} returns sandbox id \"d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a\""
Jun 21 04:45:31.595061 containerd[1723]: time="2025-06-21T04:45:31.595000471Z" level=info msg="CreateContainer within sandbox \"d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 21 04:45:31.600098 containerd[1723]: time="2025-06-21T04:45:31.600066321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmmlm,Uid:6963d713-8ef4-402e-87e8-357650a64194,Namespace:kube-system,Attempt:0,} returns sandbox id \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\""
Jun 21 04:45:31.601142 containerd[1723]: time="2025-06-21T04:45:31.601099584Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 21 04:45:31.616588 containerd[1723]: time="2025-06-21T04:45:31.616567507Z" level=info msg="Container b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:31.629599 containerd[1723]: time="2025-06-21T04:45:31.629573817Z" level=info msg="CreateContainer within sandbox \"d68b3d686be63e8e769c6f2f77f98c3e1cc0181804c667c6813ca018f5310f4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993\""
Jun 21 04:45:31.629957 containerd[1723]: time="2025-06-21T04:45:31.629921276Z" level=info msg="StartContainer for \"b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993\""
Jun 21 04:45:31.631320 containerd[1723]: time="2025-06-21T04:45:31.631296392Z" level=info msg="connecting to shim b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993" address="unix:///run/containerd/s/9d57085f9ee77ab22af67ec64547e001d01f0496bb5a2ba7f34ef96d62b80d3f" protocol=ttrpc version=3
Jun 21 04:45:31.647367 systemd[1]: Started cri-containerd-b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993.scope - libcontainer container b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993.
Jun 21 04:45:31.675045 containerd[1723]: time="2025-06-21T04:45:31.675002187Z" level=info msg="StartContainer for \"b1d02085ad05fe2dca5fc49719ce46b528acb5c4e644e8737c269b4cd829b993\" returns successfully"
Jun 21 04:45:31.689007 containerd[1723]: time="2025-06-21T04:45:31.688938646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2wzph,Uid:c7b84089-9d19-476d-998c-297d2d0892dd,Namespace:kube-system,Attempt:0,}"
Jun 21 04:45:31.718047 containerd[1723]: time="2025-06-21T04:45:31.717989257Z" level=info msg="connecting to shim 7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14" address="unix:///run/containerd/s/03022f54f704bc6bfc3959d59b4fd77389fd1569c61f6d534f6577df425477a2" namespace=k8s.io protocol=ttrpc version=3
Jun 21 04:45:31.738400 systemd[1]: Started cri-containerd-7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14.scope - libcontainer container 7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14.
Jun 21 04:45:31.787881 containerd[1723]: time="2025-06-21T04:45:31.787820323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2wzph,Uid:c7b84089-9d19-476d-998c-297d2d0892dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\""
Jun 21 04:45:35.378286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681776477.mount: Deactivated successfully.
Jun 21 04:45:36.709089 containerd[1723]: time="2025-06-21T04:45:36.709050210Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:45:36.711297 containerd[1723]: time="2025-06-21T04:45:36.711266473Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 21 04:45:36.713620 containerd[1723]: time="2025-06-21T04:45:36.713584131Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:45:36.714502 containerd[1723]: time="2025-06-21T04:45:36.714409189Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.113287357s"
Jun 21 04:45:36.714502 containerd[1723]: time="2025-06-21T04:45:36.714437491Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 21 04:45:36.715407 containerd[1723]: time="2025-06-21T04:45:36.715379482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 21 04:45:36.716883 containerd[1723]: time="2025-06-21T04:45:36.716118520Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 21 04:45:36.735777 containerd[1723]: time="2025-06-21T04:45:36.735752980Z" level=info msg="Container 76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:36.748598 containerd[1723]: time="2025-06-21T04:45:36.748572355Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\""
Jun 21 04:45:36.748932 containerd[1723]: time="2025-06-21T04:45:36.748882175Z" level=info msg="StartContainer for \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\""
Jun 21 04:45:36.749893 containerd[1723]: time="2025-06-21T04:45:36.749815388Z" level=info msg="connecting to shim 76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" protocol=ttrpc version=3
Jun 21 04:45:36.770377 systemd[1]: Started cri-containerd-76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c.scope - libcontainer container 76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c.
Jun 21 04:45:36.797197 containerd[1723]: time="2025-06-21T04:45:36.797171166Z" level=info msg="StartContainer for \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" returns successfully"
Jun 21 04:45:36.804994 systemd[1]: cri-containerd-76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c.scope: Deactivated successfully.
Jun 21 04:45:36.805092 containerd[1723]: time="2025-06-21T04:45:36.805076372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" id:\"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" pid:3547 exited_at:{seconds:1750481136 nanos:804744678}"
Jun 21 04:45:36.805336 containerd[1723]: time="2025-06-21T04:45:36.805153822Z" level=info msg="received exit event container_id:\"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" id:\"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" pid:3547 exited_at:{seconds:1750481136 nanos:804744678}"
Jun 21 04:45:36.819464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c-rootfs.mount: Deactivated successfully.
Jun 21 04:45:37.652044 kubelet[3133]: I0621 04:45:37.651784 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfkt2" podStartSLOduration=6.651762971 podStartE2EDuration="6.651762971s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:32.638704385 +0000 UTC m=+8.130689443" watchObservedRunningTime="2025-06-21 04:45:37.651762971 +0000 UTC m=+13.143748022"
Jun 21 04:45:41.143813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154733675.mount: Deactivated successfully.
Jun 21 04:45:41.585610 containerd[1723]: time="2025-06-21T04:45:41.585575187Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:45:41.587669 containerd[1723]: time="2025-06-21T04:45:41.587632237Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 21 04:45:41.590445 containerd[1723]: time="2025-06-21T04:45:41.590407450Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 04:45:41.591125 containerd[1723]: time="2025-06-21T04:45:41.591049993Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.875638208s"
Jun 21 04:45:41.591125 containerd[1723]: time="2025-06-21T04:45:41.591076675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 21 04:45:41.593178 containerd[1723]: time="2025-06-21T04:45:41.593147703Z" level=info msg="CreateContainer within sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 21 04:45:41.610323 containerd[1723]: time="2025-06-21T04:45:41.609728551Z" level=info msg="Container b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:41.612320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838160756.mount: Deactivated successfully.
Jun 21 04:45:41.623625 containerd[1723]: time="2025-06-21T04:45:41.623602160Z" level=info msg="CreateContainer within sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\""
Jun 21 04:45:41.624031 containerd[1723]: time="2025-06-21T04:45:41.623990043Z" level=info msg="StartContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\""
Jun 21 04:45:41.624903 containerd[1723]: time="2025-06-21T04:45:41.624857880Z" level=info msg="connecting to shim b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5" address="unix:///run/containerd/s/03022f54f704bc6bfc3959d59b4fd77389fd1569c61f6d534f6577df425477a2" protocol=ttrpc version=3
Jun 21 04:45:41.641425 systemd[1]: Started cri-containerd-b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5.scope - libcontainer container b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5.
Jun 21 04:45:41.654842 containerd[1723]: time="2025-06-21T04:45:41.654716318Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 21 04:45:41.673180 containerd[1723]: time="2025-06-21T04:45:41.673142587Z" level=info msg="Container 40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:41.674376 containerd[1723]: time="2025-06-21T04:45:41.674352817Z" level=info msg="StartContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" returns successfully"
Jun 21 04:45:41.687113 containerd[1723]: time="2025-06-21T04:45:41.687089730Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\""
Jun 21 04:45:41.687663 containerd[1723]: time="2025-06-21T04:45:41.687603951Z" level=info msg="StartContainer for \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\""
Jun 21 04:45:41.688371 containerd[1723]: time="2025-06-21T04:45:41.688240904Z" level=info msg="connecting to shim 40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" protocol=ttrpc version=3
Jun 21 04:45:41.709403 systemd[1]: Started cri-containerd-40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307.scope - libcontainer container 40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307.
Jun 21 04:45:41.739148 containerd[1723]: time="2025-06-21T04:45:41.739124399Z" level=info msg="StartContainer for \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" returns successfully"
Jun 21 04:45:41.752500 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 04:45:41.752908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 04:45:41.753022 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 21 04:45:41.754856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 04:45:41.755044 systemd[1]: cri-containerd-40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307.scope: Deactivated successfully.
Jun 21 04:45:41.755391 containerd[1723]: time="2025-06-21T04:45:41.755368430Z" level=info msg="received exit event container_id:\"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" id:\"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" pid:3642 exited_at:{seconds:1750481141 nanos:755107347}"
Jun 21 04:45:41.755553 containerd[1723]: time="2025-06-21T04:45:41.755534605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" id:\"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" pid:3642 exited_at:{seconds:1750481141 nanos:755107347}"
Jun 21 04:45:41.780575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 04:45:42.659042 containerd[1723]: time="2025-06-21T04:45:42.658952466Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 21 04:45:42.685206 containerd[1723]: time="2025-06-21T04:45:42.685170664Z" level=info msg="Container 741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:42.705067 containerd[1723]: time="2025-06-21T04:45:42.705005827Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\""
Jun 21 04:45:42.705585 containerd[1723]: time="2025-06-21T04:45:42.705564737Z" level=info msg="StartContainer for \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\""
Jun 21 04:45:42.706816 containerd[1723]: time="2025-06-21T04:45:42.706790580Z" level=info msg="connecting to shim 741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" protocol=ttrpc version=3
Jun 21 04:45:42.723410 systemd[1]: Started cri-containerd-741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2.scope - libcontainer container 741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2.
Jun 21 04:45:42.748050 systemd[1]: cri-containerd-741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2.scope: Deactivated successfully.
Jun 21 04:45:42.750928 containerd[1723]: time="2025-06-21T04:45:42.750907337Z" level=info msg="received exit event container_id:\"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" id:\"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" pid:3689 exited_at:{seconds:1750481142 nanos:750796116}"
Jun 21 04:45:42.751519 containerd[1723]: time="2025-06-21T04:45:42.751488519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" id:\"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" pid:3689 exited_at:{seconds:1750481142 nanos:750796116}"
Jun 21 04:45:42.753435 containerd[1723]: time="2025-06-21T04:45:42.753411436Z" level=info msg="StartContainer for \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" returns successfully"
Jun 21 04:45:42.767404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2-rootfs.mount: Deactivated successfully.
Jun 21 04:45:43.665708 containerd[1723]: time="2025-06-21T04:45:43.665673246Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 21 04:45:43.680264 kubelet[3133]: I0621 04:45:43.680020 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2wzph" podStartSLOduration=2.877326975 podStartE2EDuration="12.680003489s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="2025-06-21 04:45:31.788941003 +0000 UTC m=+7.280926059" lastFinishedPulling="2025-06-21 04:45:41.591617513 +0000 UTC m=+17.083602573" observedRunningTime="2025-06-21 04:45:42.692628645 +0000 UTC m=+18.184613704" watchObservedRunningTime="2025-06-21 04:45:43.680003489 +0000 UTC m=+19.171988546"
Jun 21 04:45:43.691146 containerd[1723]: time="2025-06-21T04:45:43.689447648Z" level=info msg="Container 89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b: CDI devices from CRI Config.CDIDevices: []"
Jun 21 04:45:43.702151 containerd[1723]: time="2025-06-21T04:45:43.702128308Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\""
Jun 21 04:45:43.703244 containerd[1723]: time="2025-06-21T04:45:43.702475799Z" level=info msg="StartContainer for \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\""
Jun 21 04:45:43.703511 containerd[1723]: time="2025-06-21T04:45:43.703467300Z" level=info msg="connecting to shim 89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" protocol=ttrpc version=3
Jun 21 04:45:43.726393 systemd[1]: Started cri-containerd-89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b.scope - libcontainer container 89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b.
Jun 21 04:45:43.746663 systemd[1]: cri-containerd-89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b.scope: Deactivated successfully.
Jun 21 04:45:43.747876 containerd[1723]: time="2025-06-21T04:45:43.747839197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" id:\"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" pid:3732 exited_at:{seconds:1750481143 nanos:747576051}"
Jun 21 04:45:43.751268 containerd[1723]: time="2025-06-21T04:45:43.750831047Z" level=info msg="received exit event container_id:\"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" id:\"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" pid:3732 exited_at:{seconds:1750481143 nanos:747576051}"
Jun 21 04:45:43.752769 containerd[1723]: time="2025-06-21T04:45:43.752733986Z" level=info msg="StartContainer for \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" returns successfully"
Jun 21 04:45:43.766277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b-rootfs.mount: Deactivated successfully.
Jun 21 04:45:44.671156 containerd[1723]: time="2025-06-21T04:45:44.670884863Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:45:44.694372 containerd[1723]: time="2025-06-21T04:45:44.693703450Z" level=info msg="Container 58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:44.704088 containerd[1723]: time="2025-06-21T04:45:44.704061674Z" level=info msg="CreateContainer within sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\"" Jun 21 04:45:44.704465 containerd[1723]: time="2025-06-21T04:45:44.704447691Z" level=info msg="StartContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\"" Jun 21 04:45:44.705467 containerd[1723]: time="2025-06-21T04:45:44.705442272Z" level=info msg="connecting to shim 58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b" address="unix:///run/containerd/s/7a84087143cb9b87c1e82ac5e7bba5ff3e089c64bd083a8fb6e9302635eedf3f" protocol=ttrpc version=3 Jun 21 04:45:44.730389 systemd[1]: Started cri-containerd-58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b.scope - libcontainer container 58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b. 
Jun 21 04:45:44.756662 containerd[1723]: time="2025-06-21T04:45:44.756638624Z" level=info msg="StartContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" returns successfully" Jun 21 04:45:44.812087 containerd[1723]: time="2025-06-21T04:45:44.812068287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" id:\"5681c4bd8fdc7d54e984acf71dd0499597e70da46c5d7758643d6e35bc00c816\" pid:3803 exited_at:{seconds:1750481144 nanos:810977393}" Jun 21 04:45:44.813884 kubelet[3133]: I0621 04:45:44.813866 3133 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 04:45:44.847009 systemd[1]: Created slice kubepods-burstable-poddfd5db89_b68d_40b5_9f88_6ad3588ba0e2.slice - libcontainer container kubepods-burstable-poddfd5db89_b68d_40b5_9f88_6ad3588ba0e2.slice. Jun 21 04:45:44.855011 systemd[1]: Created slice kubepods-burstable-pod116e46b9_7d76_4fc4_a25e_7d52fe339441.slice - libcontainer container kubepods-burstable-pod116e46b9_7d76_4fc4_a25e_7d52fe339441.slice. 
Jun 21 04:45:44.912676 kubelet[3133]: I0621 04:45:44.912650 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t464t\" (UniqueName: \"kubernetes.io/projected/dfd5db89-b68d-40b5-9f88-6ad3588ba0e2-kube-api-access-t464t\") pod \"coredns-668d6bf9bc-mtvpp\" (UID: \"dfd5db89-b68d-40b5-9f88-6ad3588ba0e2\") " pod="kube-system/coredns-668d6bf9bc-mtvpp" Jun 21 04:45:44.912756 kubelet[3133]: I0621 04:45:44.912686 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qm84\" (UniqueName: \"kubernetes.io/projected/116e46b9-7d76-4fc4-a25e-7d52fe339441-kube-api-access-6qm84\") pod \"coredns-668d6bf9bc-lmls8\" (UID: \"116e46b9-7d76-4fc4-a25e-7d52fe339441\") " pod="kube-system/coredns-668d6bf9bc-lmls8" Jun 21 04:45:44.912756 kubelet[3133]: I0621 04:45:44.912705 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd5db89-b68d-40b5-9f88-6ad3588ba0e2-config-volume\") pod \"coredns-668d6bf9bc-mtvpp\" (UID: \"dfd5db89-b68d-40b5-9f88-6ad3588ba0e2\") " pod="kube-system/coredns-668d6bf9bc-mtvpp" Jun 21 04:45:44.912756 kubelet[3133]: I0621 04:45:44.912720 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/116e46b9-7d76-4fc4-a25e-7d52fe339441-config-volume\") pod \"coredns-668d6bf9bc-lmls8\" (UID: \"116e46b9-7d76-4fc4-a25e-7d52fe339441\") " pod="kube-system/coredns-668d6bf9bc-lmls8" Jun 21 04:45:45.151865 containerd[1723]: time="2025-06-21T04:45:45.151819267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtvpp,Uid:dfd5db89-b68d-40b5-9f88-6ad3588ba0e2,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:45.162469 containerd[1723]: time="2025-06-21T04:45:45.162426569Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-lmls8,Uid:116e46b9-7d76-4fc4-a25e-7d52fe339441,Namespace:kube-system,Attempt:0,}" Jun 21 04:45:45.686633 kubelet[3133]: I0621 04:45:45.686513 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rmmlm" podStartSLOduration=9.572172427 podStartE2EDuration="14.686495729s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="2025-06-21 04:45:31.600799588 +0000 UTC m=+7.092784641" lastFinishedPulling="2025-06-21 04:45:36.715122886 +0000 UTC m=+12.207107943" observedRunningTime="2025-06-21 04:45:45.686298942 +0000 UTC m=+21.178284008" watchObservedRunningTime="2025-06-21 04:45:45.686495729 +0000 UTC m=+21.178480834" Jun 21 04:45:46.813175 systemd-networkd[1365]: cilium_host: Link UP Jun 21 04:45:46.815310 systemd-networkd[1365]: cilium_net: Link UP Jun 21 04:45:46.815499 systemd-networkd[1365]: cilium_net: Gained carrier Jun 21 04:45:46.815604 systemd-networkd[1365]: cilium_host: Gained carrier Jun 21 04:45:46.881314 systemd-networkd[1365]: cilium_net: Gained IPv6LL Jun 21 04:45:46.928118 systemd-networkd[1365]: cilium_vxlan: Link UP Jun 21 04:45:46.928194 systemd-networkd[1365]: cilium_vxlan: Gained carrier Jun 21 04:45:47.233274 kernel: NET: Registered PF_ALG protocol family Jun 21 04:45:47.625433 systemd-networkd[1365]: cilium_host: Gained IPv6LL Jun 21 04:45:47.885472 systemd-networkd[1365]: lxc_health: Link UP Jun 21 04:45:47.886679 systemd-networkd[1365]: lxc_health: Gained carrier Jun 21 04:45:48.009328 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL Jun 21 04:45:48.178790 systemd-networkd[1365]: lxc4555dd4216f4: Link UP Jun 21 04:45:48.186279 kernel: eth0: renamed from tmp85ba6 Jun 21 04:45:48.188357 systemd-networkd[1365]: lxc4555dd4216f4: Gained carrier Jun 21 04:45:48.205337 systemd-networkd[1365]: lxc7232016bd172: Link UP Jun 21 04:45:48.209274 kernel: eth0: renamed from tmpc05ce Jun 21 04:45:48.211898 systemd-networkd[1365]: lxc7232016bd172: 
Gained carrier Jun 21 04:45:49.609386 systemd-networkd[1365]: lxc_health: Gained IPv6LL Jun 21 04:45:49.737364 systemd-networkd[1365]: lxc7232016bd172: Gained IPv6LL Jun 21 04:45:50.057452 systemd-networkd[1365]: lxc4555dd4216f4: Gained IPv6LL Jun 21 04:45:50.791538 containerd[1723]: time="2025-06-21T04:45:50.791494301Z" level=info msg="connecting to shim c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf" address="unix:///run/containerd/s/e79602f93ca9a44cd4e5adaaa570f17a9f4013143159c21397d3045e67908822" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:50.804727 containerd[1723]: time="2025-06-21T04:45:50.804675349Z" level=info msg="connecting to shim 85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c" address="unix:///run/containerd/s/9af039757573b8fcb8dcf38d7c0203c18184986cce5b651bba0c6573e94fd8d2" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:45:50.828407 systemd[1]: Started cri-containerd-c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf.scope - libcontainer container c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf. Jun 21 04:45:50.831863 systemd[1]: Started cri-containerd-85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c.scope - libcontainer container 85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c. 
Jun 21 04:45:50.886327 containerd[1723]: time="2025-06-21T04:45:50.886244079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lmls8,Uid:116e46b9-7d76-4fc4-a25e-7d52fe339441,Namespace:kube-system,Attempt:0,} returns sandbox id \"c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf\"" Jun 21 04:45:50.888374 containerd[1723]: time="2025-06-21T04:45:50.888347032Z" level=info msg="CreateContainer within sandbox \"c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:45:50.890363 containerd[1723]: time="2025-06-21T04:45:50.890338504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtvpp,Uid:dfd5db89-b68d-40b5-9f88-6ad3588ba0e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c\"" Jun 21 04:45:50.892080 containerd[1723]: time="2025-06-21T04:45:50.892058690Z" level=info msg="CreateContainer within sandbox \"85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:45:50.905959 containerd[1723]: time="2025-06-21T04:45:50.905472389Z" level=info msg="Container baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:50.916638 containerd[1723]: time="2025-06-21T04:45:50.916612026Z" level=info msg="Container 2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:45:50.925127 containerd[1723]: time="2025-06-21T04:45:50.925104332Z" level=info msg="CreateContainer within sandbox \"c05ce14bd7227584884dc2b543fb77526fd4ee36d426a9b64420898ec0dca6cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716\"" Jun 21 04:45:50.926271 containerd[1723]: 
time="2025-06-21T04:45:50.925461626Z" level=info msg="StartContainer for \"baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716\"" Jun 21 04:45:50.926271 containerd[1723]: time="2025-06-21T04:45:50.926102464Z" level=info msg="connecting to shim baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716" address="unix:///run/containerd/s/e79602f93ca9a44cd4e5adaaa570f17a9f4013143159c21397d3045e67908822" protocol=ttrpc version=3 Jun 21 04:45:50.934382 containerd[1723]: time="2025-06-21T04:45:50.934336299Z" level=info msg="CreateContainer within sandbox \"85ba61248a1a496604439b0b9f98a3ffecace1938adab8231c301cf8adf10b3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1\"" Jun 21 04:45:50.935401 containerd[1723]: time="2025-06-21T04:45:50.935370841Z" level=info msg="StartContainer for \"2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1\"" Jun 21 04:45:50.936003 containerd[1723]: time="2025-06-21T04:45:50.935979080Z" level=info msg="connecting to shim 2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1" address="unix:///run/containerd/s/9af039757573b8fcb8dcf38d7c0203c18184986cce5b651bba0c6573e94fd8d2" protocol=ttrpc version=3 Jun 21 04:45:50.942479 systemd[1]: Started cri-containerd-baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716.scope - libcontainer container baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716. Jun 21 04:45:50.954432 systemd[1]: Started cri-containerd-2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1.scope - libcontainer container 2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1. 
Jun 21 04:45:50.975865 containerd[1723]: time="2025-06-21T04:45:50.975795581Z" level=info msg="StartContainer for \"baac2a4b325345d8b9dd68f2063b8514d579b33b667f2915892cdab33e371716\" returns successfully" Jun 21 04:45:50.995425 containerd[1723]: time="2025-06-21T04:45:50.995318001Z" level=info msg="StartContainer for \"2980bd46b163065cab3f64ae3f548e32981ac13348509a83605cdb2a9979c2c1\" returns successfully" Jun 21 04:45:51.698081 kubelet[3133]: I0621 04:45:51.697901 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lmls8" podStartSLOduration=20.697883311 podStartE2EDuration="20.697883311s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:51.697664098 +0000 UTC m=+27.189649156" watchObservedRunningTime="2025-06-21 04:45:51.697883311 +0000 UTC m=+27.189868372" Jun 21 04:45:51.712851 kubelet[3133]: I0621 04:45:51.712801 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mtvpp" podStartSLOduration=20.712784659 podStartE2EDuration="20.712784659s" podCreationTimestamp="2025-06-21 04:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:45:51.709810467 +0000 UTC m=+27.201795530" watchObservedRunningTime="2025-06-21 04:45:51.712784659 +0000 UTC m=+27.204769719" Jun 21 04:45:58.737849 kubelet[3133]: I0621 04:45:58.737727 3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 04:47:05.352218 systemd[1]: Started sshd@7-10.200.8.45:22-10.200.16.10:49898.service - OpenSSH per-connection server daemon (10.200.16.10:49898). 
Jun 21 04:47:05.989553 sshd[4450]: Accepted publickey for core from 10.200.16.10 port 49898 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:05.990532 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:05.994550 systemd-logind[1702]: New session 10 of user core. Jun 21 04:47:06.002469 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 04:47:06.497642 sshd[4452]: Connection closed by 10.200.16.10 port 49898 Jun 21 04:47:06.498026 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:06.501370 systemd[1]: sshd@7-10.200.8.45:22-10.200.16.10:49898.service: Deactivated successfully. Jun 21 04:47:06.502995 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 04:47:06.503805 systemd-logind[1702]: Session 10 logged out. Waiting for processes to exit. Jun 21 04:47:06.504798 systemd-logind[1702]: Removed session 10. Jun 21 04:47:11.613357 systemd[1]: Started sshd@8-10.200.8.45:22-10.200.16.10:51592.service - OpenSSH per-connection server daemon (10.200.16.10:51592). Jun 21 04:47:12.240890 sshd[4465]: Accepted publickey for core from 10.200.16.10 port 51592 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:12.242123 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:12.245580 systemd-logind[1702]: New session 11 of user core. Jun 21 04:47:12.252393 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 04:47:12.725105 sshd[4467]: Connection closed by 10.200.16.10 port 51592 Jun 21 04:47:12.725538 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:12.727715 systemd[1]: sshd@8-10.200.8.45:22-10.200.16.10:51592.service: Deactivated successfully. Jun 21 04:47:12.729177 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 04:47:12.730818 systemd-logind[1702]: Session 11 logged out. 
Waiting for processes to exit. Jun 21 04:47:12.731609 systemd-logind[1702]: Removed session 11. Jun 21 04:47:17.839072 systemd[1]: Started sshd@9-10.200.8.45:22-10.200.16.10:51598.service - OpenSSH per-connection server daemon (10.200.16.10:51598). Jun 21 04:47:18.462799 sshd[4480]: Accepted publickey for core from 10.200.16.10 port 51598 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:18.464108 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:18.468190 systemd-logind[1702]: New session 12 of user core. Jun 21 04:47:18.472424 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 04:47:18.946750 sshd[4482]: Connection closed by 10.200.16.10 port 51598 Jun 21 04:47:18.947194 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:18.950509 systemd[1]: sshd@9-10.200.8.45:22-10.200.16.10:51598.service: Deactivated successfully. Jun 21 04:47:18.952769 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 04:47:18.954361 systemd-logind[1702]: Session 12 logged out. Waiting for processes to exit. Jun 21 04:47:18.956516 systemd-logind[1702]: Removed session 12. Jun 21 04:47:24.057786 systemd[1]: Started sshd@10-10.200.8.45:22-10.200.16.10:36968.service - OpenSSH per-connection server daemon (10.200.16.10:36968). Jun 21 04:47:24.687734 sshd[4495]: Accepted publickey for core from 10.200.16.10 port 36968 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:24.688783 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:24.692789 systemd-logind[1702]: New session 13 of user core. Jun 21 04:47:24.699444 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 21 04:47:25.172947 sshd[4499]: Connection closed by 10.200.16.10 port 36968 Jun 21 04:47:25.173399 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:25.176160 systemd[1]: sshd@10-10.200.8.45:22-10.200.16.10:36968.service: Deactivated successfully. Jun 21 04:47:25.177826 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 04:47:25.178603 systemd-logind[1702]: Session 13 logged out. Waiting for processes to exit. Jun 21 04:47:25.179723 systemd-logind[1702]: Removed session 13. Jun 21 04:47:25.288382 systemd[1]: Started sshd@11-10.200.8.45:22-10.200.16.10:36980.service - OpenSSH per-connection server daemon (10.200.16.10:36980). Jun 21 04:47:25.924029 sshd[4512]: Accepted publickey for core from 10.200.16.10 port 36980 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:25.924966 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:25.928637 systemd-logind[1702]: New session 14 of user core. Jun 21 04:47:25.936405 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 04:47:26.442519 sshd[4514]: Connection closed by 10.200.16.10 port 36980 Jun 21 04:47:26.442953 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:26.445511 systemd[1]: sshd@11-10.200.8.45:22-10.200.16.10:36980.service: Deactivated successfully. Jun 21 04:47:26.446954 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 04:47:26.447709 systemd-logind[1702]: Session 14 logged out. Waiting for processes to exit. Jun 21 04:47:26.448835 systemd-logind[1702]: Removed session 14. Jun 21 04:47:26.554638 systemd[1]: Started sshd@12-10.200.8.45:22-10.200.16.10:36992.service - OpenSSH per-connection server daemon (10.200.16.10:36992). 
Jun 21 04:47:27.177934 sshd[4523]: Accepted publickey for core from 10.200.16.10 port 36992 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:27.179013 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:27.182915 systemd-logind[1702]: New session 15 of user core. Jun 21 04:47:27.188401 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 04:47:27.663118 sshd[4525]: Connection closed by 10.200.16.10 port 36992 Jun 21 04:47:27.663569 sshd-session[4523]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:27.665799 systemd[1]: sshd@12-10.200.8.45:22-10.200.16.10:36992.service: Deactivated successfully. Jun 21 04:47:27.667348 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 04:47:27.668553 systemd-logind[1702]: Session 15 logged out. Waiting for processes to exit. Jun 21 04:47:27.669836 systemd-logind[1702]: Removed session 15. Jun 21 04:47:32.774962 systemd[1]: Started sshd@13-10.200.8.45:22-10.200.16.10:49270.service - OpenSSH per-connection server daemon (10.200.16.10:49270). Jun 21 04:47:33.404741 sshd[4540]: Accepted publickey for core from 10.200.16.10 port 49270 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:33.405994 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:33.410246 systemd-logind[1702]: New session 16 of user core. Jun 21 04:47:33.417394 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 04:47:33.890779 sshd[4542]: Connection closed by 10.200.16.10 port 49270 Jun 21 04:47:33.891168 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:33.893816 systemd[1]: sshd@13-10.200.8.45:22-10.200.16.10:49270.service: Deactivated successfully. Jun 21 04:47:33.895413 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 04:47:33.896171 systemd-logind[1702]: Session 16 logged out. 
Waiting for processes to exit. Jun 21 04:47:33.897167 systemd-logind[1702]: Removed session 16. Jun 21 04:47:39.004062 systemd[1]: Started sshd@14-10.200.8.45:22-10.200.16.10:43486.service - OpenSSH per-connection server daemon (10.200.16.10:43486). Jun 21 04:47:39.638242 sshd[4554]: Accepted publickey for core from 10.200.16.10 port 43486 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:39.639572 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:39.643963 systemd-logind[1702]: New session 17 of user core. Jun 21 04:47:39.648374 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 04:47:40.126187 sshd[4556]: Connection closed by 10.200.16.10 port 43486 Jun 21 04:47:40.126589 sshd-session[4554]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:40.129275 systemd[1]: sshd@14-10.200.8.45:22-10.200.16.10:43486.service: Deactivated successfully. Jun 21 04:47:40.130899 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 04:47:40.131658 systemd-logind[1702]: Session 17 logged out. Waiting for processes to exit. Jun 21 04:47:40.132715 systemd-logind[1702]: Removed session 17. Jun 21 04:47:40.244660 systemd[1]: Started sshd@15-10.200.8.45:22-10.200.16.10:43488.service - OpenSSH per-connection server daemon (10.200.16.10:43488). Jun 21 04:47:40.870088 sshd[4568]: Accepted publickey for core from 10.200.16.10 port 43488 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:40.872224 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:40.876155 systemd-logind[1702]: New session 18 of user core. Jun 21 04:47:40.885357 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 21 04:47:41.443595 sshd[4570]: Connection closed by 10.200.16.10 port 43488 Jun 21 04:47:41.444070 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:41.447172 systemd[1]: sshd@15-10.200.8.45:22-10.200.16.10:43488.service: Deactivated successfully. Jun 21 04:47:41.448628 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 04:47:41.449439 systemd-logind[1702]: Session 18 logged out. Waiting for processes to exit. Jun 21 04:47:41.450427 systemd-logind[1702]: Removed session 18. Jun 21 04:47:41.555892 systemd[1]: Started sshd@16-10.200.8.45:22-10.200.16.10:43496.service - OpenSSH per-connection server daemon (10.200.16.10:43496). Jun 21 04:47:42.182370 sshd[4580]: Accepted publickey for core from 10.200.16.10 port 43496 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:42.183229 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:42.187004 systemd-logind[1702]: New session 19 of user core. Jun 21 04:47:42.193373 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 04:47:43.392125 sshd[4582]: Connection closed by 10.200.16.10 port 43496 Jun 21 04:47:43.392615 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:43.395675 systemd[1]: sshd@16-10.200.8.45:22-10.200.16.10:43496.service: Deactivated successfully. Jun 21 04:47:43.397450 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 04:47:43.398193 systemd-logind[1702]: Session 19 logged out. Waiting for processes to exit. Jun 21 04:47:43.399508 systemd-logind[1702]: Removed session 19. Jun 21 04:47:43.503106 systemd[1]: Started sshd@17-10.200.8.45:22-10.200.16.10:43502.service - OpenSSH per-connection server daemon (10.200.16.10:43502). 
Jun 21 04:47:44.141390 sshd[4599]: Accepted publickey for core from 10.200.16.10 port 43502 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:44.142481 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:44.146833 systemd-logind[1702]: New session 20 of user core. Jun 21 04:47:44.153408 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 04:47:44.701007 sshd[4601]: Connection closed by 10.200.16.10 port 43502 Jun 21 04:47:44.701507 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:44.704401 systemd[1]: sshd@17-10.200.8.45:22-10.200.16.10:43502.service: Deactivated successfully. Jun 21 04:47:44.706123 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 04:47:44.706934 systemd-logind[1702]: Session 20 logged out. Waiting for processes to exit. Jun 21 04:47:44.708042 systemd-logind[1702]: Removed session 20. Jun 21 04:47:44.822707 systemd[1]: Started sshd@18-10.200.8.45:22-10.200.16.10:43504.service - OpenSSH per-connection server daemon (10.200.16.10:43504). Jun 21 04:47:45.450759 sshd[4611]: Accepted publickey for core from 10.200.16.10 port 43504 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:45.451907 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:45.455987 systemd-logind[1702]: New session 21 of user core. Jun 21 04:47:45.466393 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 04:47:45.930625 sshd[4613]: Connection closed by 10.200.16.10 port 43504 Jun 21 04:47:45.933081 sshd-session[4611]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:45.935567 systemd[1]: sshd@18-10.200.8.45:22-10.200.16.10:43504.service: Deactivated successfully. Jun 21 04:47:45.937222 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 04:47:45.937883 systemd-logind[1702]: Session 21 logged out. 
Waiting for processes to exit. Jun 21 04:47:45.939066 systemd-logind[1702]: Removed session 21. Jun 21 04:47:51.049157 systemd[1]: Started sshd@19-10.200.8.45:22-10.200.16.10:43602.service - OpenSSH per-connection server daemon (10.200.16.10:43602). Jun 21 04:47:51.729608 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 43602 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:51.730799 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:51.735115 systemd-logind[1702]: New session 22 of user core. Jun 21 04:47:51.740403 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 04:47:52.240081 sshd[4628]: Connection closed by 10.200.16.10 port 43602 Jun 21 04:47:52.240359 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:52.243428 systemd[1]: sshd@19-10.200.8.45:22-10.200.16.10:43602.service: Deactivated successfully. Jun 21 04:47:52.245050 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 04:47:52.245841 systemd-logind[1702]: Session 22 logged out. Waiting for processes to exit. Jun 21 04:47:52.247030 systemd-logind[1702]: Removed session 22. Jun 21 04:47:57.351940 systemd[1]: Started sshd@20-10.200.8.45:22-10.200.16.10:43618.service - OpenSSH per-connection server daemon (10.200.16.10:43618). Jun 21 04:47:57.979830 sshd[4640]: Accepted publickey for core from 10.200.16.10 port 43618 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:57.981425 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:57.984988 systemd-logind[1702]: New session 23 of user core. Jun 21 04:47:57.990386 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 21 04:47:58.462852 sshd[4642]: Connection closed by 10.200.16.10 port 43618 Jun 21 04:47:58.463264 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:58.465367 systemd[1]: sshd@20-10.200.8.45:22-10.200.16.10:43618.service: Deactivated successfully. Jun 21 04:47:58.466922 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 04:47:58.468580 systemd-logind[1702]: Session 23 logged out. Waiting for processes to exit. Jun 21 04:47:58.469417 systemd-logind[1702]: Removed session 23. Jun 21 04:48:03.589792 systemd[1]: Started sshd@21-10.200.8.45:22-10.200.16.10:60454.service - OpenSSH per-connection server daemon (10.200.16.10:60454). Jun 21 04:48:04.223177 sshd[4656]: Accepted publickey for core from 10.200.16.10 port 60454 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:04.224503 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:04.229314 systemd-logind[1702]: New session 24 of user core. Jun 21 04:48:04.234387 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 04:48:04.709211 sshd[4658]: Connection closed by 10.200.16.10 port 60454 Jun 21 04:48:04.709764 sshd-session[4656]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:04.712513 systemd[1]: sshd@21-10.200.8.45:22-10.200.16.10:60454.service: Deactivated successfully. Jun 21 04:48:04.714177 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 04:48:04.715425 systemd-logind[1702]: Session 24 logged out. Waiting for processes to exit. Jun 21 04:48:04.716642 systemd-logind[1702]: Removed session 24. Jun 21 04:48:04.828135 systemd[1]: Started sshd@22-10.200.8.45:22-10.200.16.10:60468.service - OpenSSH per-connection server daemon (10.200.16.10:60468). 
Jun 21 04:48:05.456985 sshd[4670]: Accepted publickey for core from 10.200.16.10 port 60468 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc
Jun 21 04:48:05.458299 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 04:48:05.463037 systemd-logind[1702]: New session 25 of user core.
Jun 21 04:48:05.470394 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 21 04:48:07.090156 containerd[1723]: time="2025-06-21T04:48:07.090112530Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 04:48:07.094903 containerd[1723]: time="2025-06-21T04:48:07.094801769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" id:\"61bd171ebfc215e060227105c622dde0adf8309d05a323f03f13332085d5e126\" pid:4692 exited_at:{seconds:1750481287 nanos:94152008}"
Jun 21 04:48:07.096556 containerd[1723]: time="2025-06-21T04:48:07.096442896Z" level=info msg="StopContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" with timeout 30 (s)"
Jun 21 04:48:07.097344 containerd[1723]: time="2025-06-21T04:48:07.097323768Z" level=info msg="Stop container \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" with signal terminated"
Jun 21 04:48:07.098499 containerd[1723]: time="2025-06-21T04:48:07.098337268Z" level=info msg="StopContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" with timeout 2 (s)"
Jun 21 04:48:07.100657 containerd[1723]: time="2025-06-21T04:48:07.100630921Z" level=info msg="Stop container \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" with signal terminated"
Jun 21 04:48:07.113345 systemd-networkd[1365]: lxc_health: Link DOWN
Jun 21 04:48:07.113353 systemd-networkd[1365]: lxc_health: Lost carrier
Jun 21 04:48:07.116944 systemd[1]: cri-containerd-b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5.scope: Deactivated successfully.
Jun 21 04:48:07.119651 containerd[1723]: time="2025-06-21T04:48:07.119524743Z" level=info msg="received exit event container_id:\"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" id:\"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" pid:3609 exited_at:{seconds:1750481287 nanos:119241337}"
Jun 21 04:48:07.119973 containerd[1723]: time="2025-06-21T04:48:07.119948510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" id:\"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" pid:3609 exited_at:{seconds:1750481287 nanos:119241337}"
Jun 21 04:48:07.126303 systemd[1]: cri-containerd-58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b.scope: Deactivated successfully.
Jun 21 04:48:07.126558 systemd[1]: cri-containerd-58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b.scope: Consumed 4.755s CPU time, 123.6M memory peak, 128K read from disk, 13.3M written to disk.
Jun 21 04:48:07.129852 containerd[1723]: time="2025-06-21T04:48:07.129811232Z" level=info msg="received exit event container_id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" pid:3773 exited_at:{seconds:1750481287 nanos:129639491}"
Jun 21 04:48:07.130051 containerd[1723]: time="2025-06-21T04:48:07.130031072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" id:\"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" pid:3773 exited_at:{seconds:1750481287 nanos:129639491}"
Jun 21 04:48:07.144314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5-rootfs.mount: Deactivated successfully.
Jun 21 04:48:07.149907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b-rootfs.mount: Deactivated successfully.
Jun 21 04:48:07.198437 containerd[1723]: time="2025-06-21T04:48:07.198416333Z" level=info msg="StopContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" returns successfully"
Jun 21 04:48:07.199051 containerd[1723]: time="2025-06-21T04:48:07.199027150Z" level=info msg="StopPodSandbox for \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\""
Jun 21 04:48:07.199105 containerd[1723]: time="2025-06-21T04:48:07.199075879Z" level=info msg="Container to stop \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.199105 containerd[1723]: time="2025-06-21T04:48:07.199086988Z" level=info msg="Container to stop \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.199105 containerd[1723]: time="2025-06-21T04:48:07.199095228Z" level=info msg="Container to stop \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.199105 containerd[1723]: time="2025-06-21T04:48:07.199103414Z" level=info msg="Container to stop \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.199192 containerd[1723]: time="2025-06-21T04:48:07.199111345Z" level=info msg="Container to stop \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.201050 containerd[1723]: time="2025-06-21T04:48:07.201025745Z" level=info msg="StopContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" returns successfully"
Jun 21 04:48:07.201951 containerd[1723]: time="2025-06-21T04:48:07.201657448Z" level=info msg="StopPodSandbox for \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\""
Jun 21 04:48:07.201951 containerd[1723]: time="2025-06-21T04:48:07.201793683Z" level=info msg="Container to stop \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 04:48:07.205439 systemd[1]: cri-containerd-f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875.scope: Deactivated successfully.
Jun 21 04:48:07.206526 containerd[1723]: time="2025-06-21T04:48:07.206506623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" id:\"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" pid:3280 exit_status:137 exited_at:{seconds:1750481287 nanos:206003294}"
Jun 21 04:48:07.210892 systemd[1]: cri-containerd-7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14.scope: Deactivated successfully.
Jun 21 04:48:07.234302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875-rootfs.mount: Deactivated successfully.
Jun 21 04:48:07.236968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14-rootfs.mount: Deactivated successfully.
Jun 21 04:48:07.248959 containerd[1723]: time="2025-06-21T04:48:07.248865951Z" level=info msg="shim disconnected" id=7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14 namespace=k8s.io
Jun 21 04:48:07.248959 containerd[1723]: time="2025-06-21T04:48:07.248887644Z" level=warning msg="cleaning up after shim disconnected" id=7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14 namespace=k8s.io
Jun 21 04:48:07.248959 containerd[1723]: time="2025-06-21T04:48:07.248894990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 04:48:07.250405 containerd[1723]: time="2025-06-21T04:48:07.249285970Z" level=info msg="received exit event sandbox_id:\"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" exit_status:137 exited_at:{seconds:1750481287 nanos:206003294}"
Jun 21 04:48:07.253030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875-shm.mount: Deactivated successfully.
Jun 21 04:48:07.254677 containerd[1723]: time="2025-06-21T04:48:07.254653391Z" level=info msg="shim disconnected" id=f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875 namespace=k8s.io
Jun 21 04:48:07.254745 containerd[1723]: time="2025-06-21T04:48:07.254689093Z" level=warning msg="cleaning up after shim disconnected" id=f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875 namespace=k8s.io
Jun 21 04:48:07.254745 containerd[1723]: time="2025-06-21T04:48:07.254697412Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 04:48:07.255086 containerd[1723]: time="2025-06-21T04:48:07.255069478Z" level=info msg="TearDown network for sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" successfully"
Jun 21 04:48:07.255634 containerd[1723]: time="2025-06-21T04:48:07.255614380Z" level=info msg="StopPodSandbox for \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" returns successfully"
Jun 21 04:48:07.270749 containerd[1723]: time="2025-06-21T04:48:07.270189912Z" level=info msg="received exit event sandbox_id:\"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" exit_status:137 exited_at:{seconds:1750481287 nanos:212625700}"
Jun 21 04:48:07.270749 containerd[1723]: time="2025-06-21T04:48:07.270660947Z" level=info msg="TearDown network for sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" successfully"
Jun 21 04:48:07.270749 containerd[1723]: time="2025-06-21T04:48:07.270687011Z" level=info msg="StopPodSandbox for \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" returns successfully"
Jun 21 04:48:07.275141 containerd[1723]: time="2025-06-21T04:48:07.275050886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" id:\"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" pid:3368 exit_status:137 exited_at:{seconds:1750481287 nanos:212625700}"
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370296 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-bpf-maps\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370326 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-xtables-lock\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370349 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6963d713-8ef4-402e-87e8-357650a64194-clustermesh-secrets\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370367 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6cp4\" (UniqueName: \"kubernetes.io/projected/c7b84089-9d19-476d-998c-297d2d0892dd-kube-api-access-m6cp4\") pod \"c7b84089-9d19-476d-998c-297d2d0892dd\" (UID: \"c7b84089-9d19-476d-998c-297d2d0892dd\") "
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370376 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.371292 kubelet[3133]: I0621 04:48:07.370386 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7b84089-9d19-476d-998c-297d2d0892dd-cilium-config-path\") pod \"c7b84089-9d19-476d-998c-297d2d0892dd\" (UID: \"c7b84089-9d19-476d-998c-297d2d0892dd\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370401 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-hubble-tls\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370416 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cni-path\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370430 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-hostproc\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370447 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-cgroup\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370462 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-net\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371709 kubelet[3133]: I0621 04:48:07.370479 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zhcr\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-kube-api-access-9zhcr\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370497 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6963d713-8ef4-402e-87e8-357650a64194-cilium-config-path\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370514 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-kernel\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370530 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-lib-modules\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370549 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-run\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370565 3133 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-etc-cni-netd\") pod \"6963d713-8ef4-402e-87e8-357650a64194\" (UID: \"6963d713-8ef4-402e-87e8-357650a64194\") "
Jun 21 04:48:07.371855 kubelet[3133]: I0621 04:48:07.370601 3133 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-bpf-maps\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.371999 kubelet[3133]: I0621 04:48:07.370621 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.374009 kubelet[3133]: I0621 04:48:07.373984 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.374279 kubelet[3133]: I0621 04:48:07.374134 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b84089-9d19-476d-998c-297d2d0892dd-kube-api-access-m6cp4" (OuterVolumeSpecName: "kube-api-access-m6cp4") pod "c7b84089-9d19-476d-998c-297d2d0892dd" (UID: "c7b84089-9d19-476d-998c-297d2d0892dd"). InnerVolumeSpecName "kube-api-access-m6cp4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 21 04:48:07.374741 kubelet[3133]: I0621 04:48:07.374718 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cni-path" (OuterVolumeSpecName: "cni-path") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.374836 kubelet[3133]: I0621 04:48:07.374827 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-hostproc" (OuterVolumeSpecName: "hostproc") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.375056 kubelet[3133]: I0621 04:48:07.374878 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.375136 kubelet[3133]: I0621 04:48:07.375127 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.375342 kubelet[3133]: I0621 04:48:07.375322 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.375389 kubelet[3133]: I0621 04:48:07.375327 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.375652 kubelet[3133]: I0621 04:48:07.375636 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6963d713-8ef4-402e-87e8-357650a64194-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 21 04:48:07.375708 kubelet[3133]: I0621 04:48:07.375697 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 21 04:48:07.377567 kubelet[3133]: I0621 04:48:07.377543 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-kube-api-access-9zhcr" (OuterVolumeSpecName: "kube-api-access-9zhcr") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "kube-api-access-9zhcr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 21 04:48:07.377567 kubelet[3133]: I0621 04:48:07.377551 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 21 04:48:07.378040 kubelet[3133]: I0621 04:48:07.378028 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b84089-9d19-476d-998c-297d2d0892dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7b84089-9d19-476d-998c-297d2d0892dd" (UID: "c7b84089-9d19-476d-998c-297d2d0892dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 21 04:48:07.378677 kubelet[3133]: I0621 04:48:07.378661 3133 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6963d713-8ef4-402e-87e8-357650a64194-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6963d713-8ef4-402e-87e8-357650a64194" (UID: "6963d713-8ef4-402e-87e8-357650a64194"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 21 04:48:07.471353 kubelet[3133]: I0621 04:48:07.471327 3133 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-cgroup\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471353 kubelet[3133]: I0621 04:48:07.471350 3133 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cni-path\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471353 kubelet[3133]: I0621 04:48:07.471358 3133 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-hostproc\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471367 3133 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6963d713-8ef4-402e-87e8-357650a64194-cilium-config-path\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471376 3133 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-net\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471387 3133 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zhcr\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-kube-api-access-9zhcr\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471395 3133 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-host-proc-sys-kernel\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471405 3133 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-etc-cni-netd\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471413 3133 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-lib-modules\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471421 3133 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-cilium-run\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471479 kubelet[3133]: I0621 04:48:07.471429 3133 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6963d713-8ef4-402e-87e8-357650a64194-clustermesh-secrets\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471621 kubelet[3133]: I0621 04:48:07.471438 3133 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6963d713-8ef4-402e-87e8-357650a64194-xtables-lock\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471621 kubelet[3133]: I0621 04:48:07.471447 3133 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6963d713-8ef4-402e-87e8-357650a64194-hubble-tls\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471621 kubelet[3133]: I0621 04:48:07.471455 3133 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6cp4\" (UniqueName: \"kubernetes.io/projected/c7b84089-9d19-476d-998c-297d2d0892dd-kube-api-access-m6cp4\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.471621 kubelet[3133]: I0621 04:48:07.471465 3133 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7b84089-9d19-476d-998c-297d2d0892dd-cilium-config-path\") on node \"ci-4372.0.0-a-1fcff97c08\" DevicePath \"\""
Jun 21 04:48:07.918970 kubelet[3133]: I0621 04:48:07.918897 3133 scope.go:117] "RemoveContainer" containerID="b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5"
Jun 21 04:48:07.921407 containerd[1723]: time="2025-06-21T04:48:07.921285833Z" level=info msg="RemoveContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\""
Jun 21 04:48:07.925350 systemd[1]: Removed slice kubepods-besteffort-podc7b84089_9d19_476d_998c_297d2d0892dd.slice - libcontainer container kubepods-besteffort-podc7b84089_9d19_476d_998c_297d2d0892dd.slice.
Jun 21 04:48:07.929923 systemd[1]: Removed slice kubepods-burstable-pod6963d713_8ef4_402e_87e8_357650a64194.slice - libcontainer container kubepods-burstable-pod6963d713_8ef4_402e_87e8_357650a64194.slice.
Jun 21 04:48:07.930305 systemd[1]: kubepods-burstable-pod6963d713_8ef4_402e_87e8_357650a64194.slice: Consumed 4.819s CPU time, 124M memory peak, 128K read from disk, 13.3M written to disk.
Jun 21 04:48:07.931898 containerd[1723]: time="2025-06-21T04:48:07.931864626Z" level=info msg="RemoveContainer for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" returns successfully"
Jun 21 04:48:07.932095 kubelet[3133]: I0621 04:48:07.932078 3133 scope.go:117] "RemoveContainer" containerID="b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5"
Jun 21 04:48:07.932389 containerd[1723]: time="2025-06-21T04:48:07.932310680Z" level=error msg="ContainerStatus for \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\": not found"
Jun 21 04:48:07.932873 kubelet[3133]: E0621 04:48:07.932850 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\": not found" containerID="b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5"
Jun 21 04:48:07.932962 kubelet[3133]: I0621 04:48:07.932880 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5"} err="failed to get container status \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2f26556ea70e3a8368a97771170ede596fd8914a3bafba20a5d604088621ad5\": not found"
Jun 21 04:48:07.932999 kubelet[3133]: I0621 04:48:07.932966 3133 scope.go:117] "RemoveContainer" containerID="58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b"
Jun 21 04:48:07.935375 containerd[1723]: time="2025-06-21T04:48:07.935347500Z" level=info msg="RemoveContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\""
Jun 21 04:48:07.942009 containerd[1723]: time="2025-06-21T04:48:07.941972659Z" level=info msg="RemoveContainer for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" returns successfully"
Jun 21 04:48:07.942161 kubelet[3133]: I0621 04:48:07.942143 3133 scope.go:117] "RemoveContainer" containerID="89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b"
Jun 21 04:48:07.943227 containerd[1723]: time="2025-06-21T04:48:07.943165293Z" level=info msg="RemoveContainer for \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\""
Jun 21 04:48:07.951343 containerd[1723]: time="2025-06-21T04:48:07.951294536Z" level=info msg="RemoveContainer for \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" returns successfully"
Jun 21 04:48:07.951567 kubelet[3133]: I0621 04:48:07.951550 3133 scope.go:117] "RemoveContainer" containerID="741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2"
Jun 21 04:48:07.953678 containerd[1723]: time="2025-06-21T04:48:07.953651644Z" level=info msg="RemoveContainer for \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\""
Jun 21 04:48:07.960679 containerd[1723]: time="2025-06-21T04:48:07.960625522Z" level=info msg="RemoveContainer for \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" returns successfully"
Jun 21 04:48:07.960872 kubelet[3133]: I0621 04:48:07.960837 3133 scope.go:117] "RemoveContainer" containerID="40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307"
Jun 21 04:48:07.962127 containerd[1723]: time="2025-06-21T04:48:07.962094537Z" level=info msg="RemoveContainer for \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\""
Jun 21 04:48:07.967591 containerd[1723]: time="2025-06-21T04:48:07.967563440Z" level=info msg="RemoveContainer for \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" returns successfully"
Jun 21 04:48:07.967726 kubelet[3133]: I0621 04:48:07.967713 3133 scope.go:117] "RemoveContainer" containerID="76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c"
Jun 21 04:48:07.968962 containerd[1723]: time="2025-06-21T04:48:07.968938014Z" level=info msg="RemoveContainer for \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\""
Jun 21 04:48:07.976361 containerd[1723]: time="2025-06-21T04:48:07.976319620Z" level=info msg="RemoveContainer for \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" returns successfully"
Jun 21 04:48:07.976499 kubelet[3133]: I0621 04:48:07.976482 3133 scope.go:117] "RemoveContainer" containerID="58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b"
Jun 21 04:48:07.976715 containerd[1723]: time="2025-06-21T04:48:07.976689874Z" level=error msg="ContainerStatus for \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\": not found"
Jun 21 04:48:07.976787 kubelet[3133]: E0621 04:48:07.976770 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\": not found" containerID="58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b"
Jun 21 04:48:07.976823 kubelet[3133]: I0621 04:48:07.976793 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b"} err="failed to get container status \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"58efa8da0998a3a1e9283c22beb3be74588363160a6c03619d800f73570fee3b\": not found"
Jun 21 04:48:07.976823 kubelet[3133]: I0621 04:48:07.976810 3133 scope.go:117] "RemoveContainer" containerID="89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b"
Jun 21 04:48:07.976950 containerd[1723]: time="2025-06-21T04:48:07.976929385Z" level=error msg="ContainerStatus for \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\": not found"
Jun 21 04:48:07.977034 kubelet[3133]: E0621 04:48:07.977016 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\": not found" containerID="89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b"
Jun 21 04:48:07.977062 kubelet[3133]: I0621 04:48:07.977035 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b"} err="failed to get container status \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\": rpc error: code = NotFound desc = an error occurred when try to find container \"89839ae428bd60fab8aa4193abdbbb60d7ce8f80e2930ce542d54de6bbaea71b\": not found"
Jun 21 04:48:07.977062 kubelet[3133]: I0621 04:48:07.977050 3133 scope.go:117] "RemoveContainer" containerID="741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2"
Jun 21 04:48:07.977209 containerd[1723]: time="2025-06-21T04:48:07.977162170Z" level=error msg="ContainerStatus for \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\": not found"
Jun 21 04:48:07.977287 kubelet[3133]: E0621 04:48:07.977270 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\": not found" containerID="741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2"
Jun 21 04:48:07.977330 kubelet[3133]: I0621 04:48:07.977288 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2"} err="failed to get container status \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"741c9acfb790c285a1fe3d3e17c38b20b9314541280e8c785850ee0b5ea542e2\": not found"
Jun 21 04:48:07.977330 kubelet[3133]: I0621 04:48:07.977307 3133 scope.go:117] "RemoveContainer" containerID="40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307"
Jun 21 04:48:07.977496 containerd[1723]: time="2025-06-21T04:48:07.977461745Z" level=error msg="ContainerStatus for \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\": not found"
Jun 21 04:48:07.977563 kubelet[3133]: E0621 04:48:07.977549 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\": not found"
containerID="40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307" Jun 21 04:48:07.977590 kubelet[3133]: I0621 04:48:07.977567 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307"} err="failed to get container status \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\": rpc error: code = NotFound desc = an error occurred when try to find container \"40680d63cca2b5badcae5013d40c2c2bce15083e06b89b5b71bf98c8ba5c9307\": not found" Jun 21 04:48:07.977590 kubelet[3133]: I0621 04:48:07.977582 3133 scope.go:117] "RemoveContainer" containerID="76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c" Jun 21 04:48:07.977717 containerd[1723]: time="2025-06-21T04:48:07.977684925Z" level=error msg="ContainerStatus for \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\": not found" Jun 21 04:48:07.977789 kubelet[3133]: E0621 04:48:07.977776 3133 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\": not found" containerID="76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c" Jun 21 04:48:07.977815 kubelet[3133]: I0621 04:48:07.977790 3133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c"} err="failed to get container status \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\": rpc error: code = NotFound desc = an error occurred when try to find container \"76ef340e72449ff4913ae12c3e4a58d3a32ce7384ce55b654c2bd76ff347f08c\": not found" Jun 21 
04:48:08.143654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14-shm.mount: Deactivated successfully. Jun 21 04:48:08.143732 systemd[1]: var-lib-kubelet-pods-c7b84089\x2d9d19\x2d476d\x2d998c\x2d297d2d0892dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6cp4.mount: Deactivated successfully. Jun 21 04:48:08.143786 systemd[1]: var-lib-kubelet-pods-6963d713\x2d8ef4\x2d402e\x2d87e8\x2d357650a64194-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9zhcr.mount: Deactivated successfully. Jun 21 04:48:08.143834 systemd[1]: var-lib-kubelet-pods-6963d713\x2d8ef4\x2d402e\x2d87e8\x2d357650a64194-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 04:48:08.143881 systemd[1]: var-lib-kubelet-pods-6963d713\x2d8ef4\x2d402e\x2d87e8\x2d357650a64194-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 04:48:08.584782 kubelet[3133]: I0621 04:48:08.584741 3133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6963d713-8ef4-402e-87e8-357650a64194" path="/var/lib/kubelet/pods/6963d713-8ef4-402e-87e8-357650a64194/volumes" Jun 21 04:48:08.585417 kubelet[3133]: I0621 04:48:08.585395 3133 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b84089-9d19-476d-998c-297d2d0892dd" path="/var/lib/kubelet/pods/c7b84089-9d19-476d-998c-297d2d0892dd/volumes" Jun 21 04:48:09.129754 sshd[4672]: Connection closed by 10.200.16.10 port 60468 Jun 21 04:48:09.130449 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:09.133418 systemd[1]: sshd@22-10.200.8.45:22-10.200.16.10:60468.service: Deactivated successfully. Jun 21 04:48:09.135190 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 04:48:09.136498 systemd-logind[1702]: Session 25 logged out. Waiting for processes to exit. Jun 21 04:48:09.137811 systemd-logind[1702]: Removed session 25. 
Jun 21 04:48:09.244864 systemd[1]: Started sshd@23-10.200.8.45:22-10.200.16.10:40298.service - OpenSSH per-connection server daemon (10.200.16.10:40298). Jun 21 04:48:09.664953 kubelet[3133]: E0621 04:48:09.664735 3133 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 04:48:09.873885 sshd[4824]: Accepted publickey for core from 10.200.16.10 port 40298 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:09.874943 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:09.878929 systemd-logind[1702]: New session 26 of user core. Jun 21 04:48:09.889410 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 21 04:48:10.652063 kubelet[3133]: I0621 04:48:10.651940 3133 memory_manager.go:355] "RemoveStaleState removing state" podUID="6963d713-8ef4-402e-87e8-357650a64194" containerName="cilium-agent" Jun 21 04:48:10.652377 kubelet[3133]: I0621 04:48:10.652136 3133 memory_manager.go:355] "RemoveStaleState removing state" podUID="c7b84089-9d19-476d-998c-297d2d0892dd" containerName="cilium-operator" Jun 21 04:48:10.663772 systemd[1]: Created slice kubepods-burstable-pod7bfade62_dc96_4bd5_8bbb_51a476eb7b18.slice - libcontainer container kubepods-burstable-pod7bfade62_dc96_4bd5_8bbb_51a476eb7b18.slice. Jun 21 04:48:10.760180 sshd[4826]: Connection closed by 10.200.16.10 port 40298 Jun 21 04:48:10.760619 sshd-session[4824]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:10.763436 systemd[1]: sshd@23-10.200.8.45:22-10.200.16.10:40298.service: Deactivated successfully. Jun 21 04:48:10.764956 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 04:48:10.765665 systemd-logind[1702]: Session 26 logged out. Waiting for processes to exit. Jun 21 04:48:10.766783 systemd-logind[1702]: Removed session 26. 
Jun 21 04:48:10.788924 kubelet[3133]: I0621 04:48:10.788892 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-etc-cni-netd\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.788931 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-hubble-tls\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.788951 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mps94\" (UniqueName: \"kubernetes.io/projected/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-kube-api-access-mps94\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.788968 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-lib-modules\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.788991 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-cilium-config-path\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.789006 3133 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-host-proc-sys-net\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789267 kubelet[3133]: I0621 04:48:10.789021 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-cilium-run\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789036 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-hostproc\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789097 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-cni-path\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789116 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-xtables-lock\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789130 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-cilium-ipsec-secrets\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789148 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-host-proc-sys-kernel\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789366 kubelet[3133]: I0621 04:48:10.789167 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-bpf-maps\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789449 kubelet[3133]: I0621 04:48:10.789182 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-cilium-cgroup\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.789449 kubelet[3133]: I0621 04:48:10.789202 3133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bfade62-dc96-4bd5-8bbb-51a476eb7b18-clustermesh-secrets\") pod \"cilium-c7qmj\" (UID: \"7bfade62-dc96-4bd5-8bbb-51a476eb7b18\") " pod="kube-system/cilium-c7qmj" Jun 21 04:48:10.913163 systemd[1]: Started sshd@24-10.200.8.45:22-10.200.16.10:40300.service - OpenSSH per-connection server daemon (10.200.16.10:40300). 
Jun 21 04:48:10.968035 containerd[1723]: time="2025-06-21T04:48:10.968004943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7qmj,Uid:7bfade62-dc96-4bd5-8bbb-51a476eb7b18,Namespace:kube-system,Attempt:0,}" Jun 21 04:48:10.996662 containerd[1723]: time="2025-06-21T04:48:10.996627101Z" level=info msg="connecting to shim 9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:48:11.014423 systemd[1]: Started cri-containerd-9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f.scope - libcontainer container 9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f. Jun 21 04:48:11.036146 containerd[1723]: time="2025-06-21T04:48:11.036123466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7qmj,Uid:7bfade62-dc96-4bd5-8bbb-51a476eb7b18,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\"" Jun 21 04:48:11.038079 containerd[1723]: time="2025-06-21T04:48:11.038056835Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 04:48:11.049524 containerd[1723]: time="2025-06-21T04:48:11.049501222Z" level=info msg="Container effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:11.064022 containerd[1723]: time="2025-06-21T04:48:11.063999383Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\"" Jun 21 04:48:11.064499 containerd[1723]: time="2025-06-21T04:48:11.064361521Z" level=info 
msg="StartContainer for \"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\"" Jun 21 04:48:11.065127 containerd[1723]: time="2025-06-21T04:48:11.065104152Z" level=info msg="connecting to shim effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" protocol=ttrpc version=3 Jun 21 04:48:11.079429 systemd[1]: Started cri-containerd-effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897.scope - libcontainer container effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897. Jun 21 04:48:11.101423 containerd[1723]: time="2025-06-21T04:48:11.101368793Z" level=info msg="StartContainer for \"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\" returns successfully" Jun 21 04:48:11.104802 systemd[1]: cri-containerd-effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897.scope: Deactivated successfully. Jun 21 04:48:11.108245 containerd[1723]: time="2025-06-21T04:48:11.108224146Z" level=info msg="received exit event container_id:\"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\" id:\"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\" pid:4903 exited_at:{seconds:1750481291 nanos:108058780}" Jun 21 04:48:11.108466 containerd[1723]: time="2025-06-21T04:48:11.108328221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\" id:\"effd9298b30fbd1a62519a8c3b304be6672863a88690dedfb376de5f56e51897\" pid:4903 exited_at:{seconds:1750481291 nanos:108058780}" Jun 21 04:48:11.560605 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 40300 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:11.561985 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:11.566395 systemd-logind[1702]: New session 27 of user core. 
Jun 21 04:48:11.576388 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 21 04:48:11.937758 containerd[1723]: time="2025-06-21T04:48:11.937362777Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 04:48:11.953723 containerd[1723]: time="2025-06-21T04:48:11.953421617Z" level=info msg="Container 7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:11.968625 containerd[1723]: time="2025-06-21T04:48:11.968600411Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\"" Jun 21 04:48:11.969147 containerd[1723]: time="2025-06-21T04:48:11.969063969Z" level=info msg="StartContainer for \"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\"" Jun 21 04:48:11.969779 containerd[1723]: time="2025-06-21T04:48:11.969748085Z" level=info msg="connecting to shim 7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" protocol=ttrpc version=3 Jun 21 04:48:11.990390 systemd[1]: Started cri-containerd-7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2.scope - libcontainer container 7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2. Jun 21 04:48:12.000835 sshd[4934]: Connection closed by 10.200.16.10 port 40300 Jun 21 04:48:12.001342 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:12.005784 systemd[1]: sshd@24-10.200.8.45:22-10.200.16.10:40300.service: Deactivated successfully. Jun 21 04:48:12.009089 systemd[1]: session-27.scope: Deactivated successfully. 
Jun 21 04:48:12.011212 systemd-logind[1702]: Session 27 logged out. Waiting for processes to exit. Jun 21 04:48:12.014403 systemd-logind[1702]: Removed session 27. Jun 21 04:48:12.019217 containerd[1723]: time="2025-06-21T04:48:12.019192457Z" level=info msg="StartContainer for \"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\" returns successfully" Jun 21 04:48:12.020919 systemd[1]: cri-containerd-7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2.scope: Deactivated successfully. Jun 21 04:48:12.021659 containerd[1723]: time="2025-06-21T04:48:12.021626241Z" level=info msg="received exit event container_id:\"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\" id:\"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\" pid:4949 exited_at:{seconds:1750481292 nanos:21364462}" Jun 21 04:48:12.021921 containerd[1723]: time="2025-06-21T04:48:12.021895667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\" id:\"7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2\" pid:4949 exited_at:{seconds:1750481292 nanos:21364462}" Jun 21 04:48:12.033922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d9c7b8a10eb41728e0f88f965476e41a9416f6fdfb8dee95b33769bd4fde3a2-rootfs.mount: Deactivated successfully. Jun 21 04:48:12.110912 systemd[1]: Started sshd@25-10.200.8.45:22-10.200.16.10:40308.service - OpenSSH per-connection server daemon (10.200.16.10:40308). Jun 21 04:48:12.748653 sshd[4986]: Accepted publickey for core from 10.200.16.10 port 40308 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:12.749752 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:12.754177 systemd-logind[1702]: New session 28 of user core. Jun 21 04:48:12.759403 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 21 04:48:12.941590 containerd[1723]: time="2025-06-21T04:48:12.941551723Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 04:48:12.956894 containerd[1723]: time="2025-06-21T04:48:12.956760852Z" level=info msg="Container 914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:12.965098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738810144.mount: Deactivated successfully. Jun 21 04:48:12.976392 containerd[1723]: time="2025-06-21T04:48:12.976365779Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\"" Jun 21 04:48:12.976937 containerd[1723]: time="2025-06-21T04:48:12.976840595Z" level=info msg="StartContainer for \"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\"" Jun 21 04:48:12.978621 containerd[1723]: time="2025-06-21T04:48:12.978585382Z" level=info msg="connecting to shim 914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" protocol=ttrpc version=3 Jun 21 04:48:12.998378 systemd[1]: Started cri-containerd-914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95.scope - libcontainer container 914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95. Jun 21 04:48:13.040996 systemd[1]: cri-containerd-914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95.scope: Deactivated successfully. 
Jun 21 04:48:13.044007 containerd[1723]: time="2025-06-21T04:48:13.043941721Z" level=info msg="received exit event container_id:\"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\" id:\"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\" pid:5003 exited_at:{seconds:1750481293 nanos:43698183}" Jun 21 04:48:13.044007 containerd[1723]: time="2025-06-21T04:48:13.043965653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\" id:\"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\" pid:5003 exited_at:{seconds:1750481293 nanos:43698183}" Jun 21 04:48:13.046157 containerd[1723]: time="2025-06-21T04:48:13.045148201Z" level=info msg="StartContainer for \"914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95\" returns successfully" Jun 21 04:48:13.058907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914e7d17d2bc63999f0653058671d52af4dd27fa8e24ff12f7f684fbf2077e95-rootfs.mount: Deactivated successfully. 
Jun 21 04:48:13.946444 containerd[1723]: time="2025-06-21T04:48:13.946392881Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 04:48:13.964283 containerd[1723]: time="2025-06-21T04:48:13.962979570Z" level=info msg="Container 4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:13.975051 containerd[1723]: time="2025-06-21T04:48:13.975027410Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\"" Jun 21 04:48:13.975431 containerd[1723]: time="2025-06-21T04:48:13.975386050Z" level=info msg="StartContainer for \"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\"" Jun 21 04:48:13.976484 containerd[1723]: time="2025-06-21T04:48:13.976401970Z" level=info msg="connecting to shim 4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" protocol=ttrpc version=3 Jun 21 04:48:13.995390 systemd[1]: Started cri-containerd-4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab.scope - libcontainer container 4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab. Jun 21 04:48:14.015940 systemd[1]: cri-containerd-4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab.scope: Deactivated successfully. 
Jun 21 04:48:14.016575 containerd[1723]: time="2025-06-21T04:48:14.016354080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\" id:\"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\" pid:5050 exited_at:{seconds:1750481294 nanos:16157482}" Jun 21 04:48:14.019612 containerd[1723]: time="2025-06-21T04:48:14.019423872Z" level=info msg="received exit event container_id:\"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\" id:\"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\" pid:5050 exited_at:{seconds:1750481294 nanos:16157482}" Jun 21 04:48:14.025156 containerd[1723]: time="2025-06-21T04:48:14.025136100Z" level=info msg="StartContainer for \"4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab\" returns successfully" Jun 21 04:48:14.034377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fa1ed1bf349a8ac47b0cbb2d3f55a07b0e12ff7c4e20dda654a6d52f93ed1ab-rootfs.mount: Deactivated successfully. Jun 21 04:48:14.665973 kubelet[3133]: E0621 04:48:14.665929 3133 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 04:48:14.951488 containerd[1723]: time="2025-06-21T04:48:14.951387829Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:48:14.968897 containerd[1723]: time="2025-06-21T04:48:14.968298961Z" level=info msg="Container ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:14.973429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981305972.mount: Deactivated successfully. 
Jun 21 04:48:14.982081 containerd[1723]: time="2025-06-21T04:48:14.982057336Z" level=info msg="CreateContainer within sandbox \"9fe88ae9cb326ba9c4a0d75241ef763744bff81fdb5e4d26b963e255f32f656f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\""
Jun 21 04:48:14.982714 containerd[1723]: time="2025-06-21T04:48:14.982613393Z" level=info msg="StartContainer for \"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\""
Jun 21 04:48:14.983651 containerd[1723]: time="2025-06-21T04:48:14.983615369Z" level=info msg="connecting to shim ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e" address="unix:///run/containerd/s/55924628b88354d35eef4ccb494cdf0c18a497abb784310fe8eb68192cc452d9" protocol=ttrpc version=3
Jun 21 04:48:15.002370 systemd[1]: Started cri-containerd-ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e.scope - libcontainer container ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e.
Jun 21 04:48:15.030244 containerd[1723]: time="2025-06-21T04:48:15.030219340Z" level=info msg="StartContainer for \"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" returns successfully"
Jun 21 04:48:15.077043 containerd[1723]: time="2025-06-21T04:48:15.077017683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" id:\"ad8c22d842dbbc53b979696a0505d564ca69795dcb3747a81d048ba5223f67f5\" pid:5121 exited_at:{seconds:1750481295 nanos:76640623}"
Jun 21 04:48:15.325274 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jun 21 04:48:17.332769 containerd[1723]: time="2025-06-21T04:48:17.332731114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" id:\"94085b452d77931a570ec9188ff7aacee1731f717fe6c3b2c6b028fb58bcd48f\" pid:5463 exit_status:1 exited_at:{seconds:1750481297 nanos:332131215}"
Jun 21 04:48:17.761636 systemd-networkd[1365]: lxc_health: Link UP
Jun 21 04:48:17.764296 systemd-networkd[1365]: lxc_health: Gained carrier
Jun 21 04:48:18.970939 kubelet[3133]: I0621 04:48:18.970887 3133 setters.go:602] "Node became not ready" node="ci-4372.0.0-a-1fcff97c08" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T04:48:18Z","lastTransitionTime":"2025-06-21T04:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 21 04:48:18.986373 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Jun 21 04:48:19.096505 kubelet[3133]: I0621 04:48:19.096447 3133 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c7qmj" podStartSLOduration=9.09641366 podStartE2EDuration="9.09641366s" podCreationTimestamp="2025-06-21 04:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:48:15.968593514 +0000 UTC m=+171.460578574" watchObservedRunningTime="2025-06-21 04:48:19.09641366 +0000 UTC m=+174.588398715"
Jun 21 04:48:19.445437 containerd[1723]: time="2025-06-21T04:48:19.445337098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" id:\"197ce2800f5db4ec67fd44d10fbf2bad448cfd44c3dc055e6ba46d64e6f8d86c\" pid:5653 exited_at:{seconds:1750481299 nanos:445129230}"
Jun 21 04:48:21.541392 containerd[1723]: time="2025-06-21T04:48:21.541189589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" id:\"f1ac12cf0721046a762c5047779489c501b82b0800d656b7f7ababc1e04926a2\" pid:5687 exited_at:{seconds:1750481301 nanos:540866911}"
Jun 21 04:48:23.621688 containerd[1723]: time="2025-06-21T04:48:23.621584244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c6a5df5aba4e28512e65ad62547db445f311483c383f21f74ab1750704d2e\" id:\"363c417fefd53ade519d02506817a32f9a0a44bbc3a7f2821d8c08012a7c7960\" pid:5709 exited_at:{seconds:1750481303 nanos:621088478}"
Jun 21 04:48:23.623534 kubelet[3133]: E0621 04:48:23.623461 3133 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49986->127.0.0.1:35619: write tcp 127.0.0.1:49986->127.0.0.1:35619: write: broken pipe
Jun 21 04:48:23.726436 sshd[4989]: Connection closed by 10.200.16.10 port 40308
Jun 21 04:48:23.726920 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Jun 21 04:48:23.730057 systemd[1]: sshd@25-10.200.8.45:22-10.200.16.10:40308.service: Deactivated successfully.
Jun 21 04:48:23.731598 systemd[1]: session-28.scope: Deactivated successfully.
Jun 21 04:48:23.732245 systemd-logind[1702]: Session 28 logged out. Waiting for processes to exit.
Jun 21 04:48:23.733525 systemd-logind[1702]: Removed session 28.
Jun 21 04:48:24.595942 containerd[1723]: time="2025-06-21T04:48:24.595905204Z" level=info msg="StopPodSandbox for \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\""
Jun 21 04:48:24.596101 containerd[1723]: time="2025-06-21T04:48:24.596055586Z" level=info msg="TearDown network for sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" successfully"
Jun 21 04:48:24.596101 containerd[1723]: time="2025-06-21T04:48:24.596067194Z" level=info msg="StopPodSandbox for \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" returns successfully"
Jun 21 04:48:24.596508 containerd[1723]: time="2025-06-21T04:48:24.596489763Z" level=info msg="RemovePodSandbox for \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\""
Jun 21 04:48:24.596576 containerd[1723]: time="2025-06-21T04:48:24.596511764Z" level=info msg="Forcibly stopping sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\""
Jun 21 04:48:24.596600 containerd[1723]: time="2025-06-21T04:48:24.596582807Z" level=info msg="TearDown network for sandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" successfully"
Jun 21 04:48:24.597535 containerd[1723]: time="2025-06-21T04:48:24.597515809Z" level=info msg="Ensure that sandbox f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875 in task-service has been cleanup successfully"
Jun 21 04:48:24.604731 containerd[1723]: time="2025-06-21T04:48:24.604707685Z" level=info msg="RemovePodSandbox \"f73367f6dab1035f0afd75a14df2bc6aa2912f35ef726f37d1abdc1d6e844875\" returns successfully"
Jun 21 04:48:24.605127 containerd[1723]: time="2025-06-21T04:48:24.605093644Z" level=info msg="StopPodSandbox for \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\""
Jun 21 04:48:24.605243 containerd[1723]: time="2025-06-21T04:48:24.605228926Z" level=info msg="TearDown network for sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" successfully"
Jun 21 04:48:24.605349 containerd[1723]: time="2025-06-21T04:48:24.605239965Z" level=info msg="StopPodSandbox for \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" returns successfully"
Jun 21 04:48:24.605539 containerd[1723]: time="2025-06-21T04:48:24.605523928Z" level=info msg="RemovePodSandbox for \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\""
Jun 21 04:48:24.605584 containerd[1723]: time="2025-06-21T04:48:24.605543155Z" level=info msg="Forcibly stopping sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\""
Jun 21 04:48:24.605639 containerd[1723]: time="2025-06-21T04:48:24.605628586Z" level=info msg="TearDown network for sandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" successfully"
Jun 21 04:48:24.606590 containerd[1723]: time="2025-06-21T04:48:24.606571271Z" level=info msg="Ensure that sandbox 7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14 in task-service has been cleanup successfully"
Jun 21 04:48:24.613607 containerd[1723]: time="2025-06-21T04:48:24.613577745Z" level=info msg="RemovePodSandbox \"7a413417b770a2847fad15c2cbb23d6ab0f9f85285425dd0e14a1045c03e4c14\" returns successfully"