Jun 20 19:14:55.998050 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:14:55.998083 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:55.998093 kernel: BIOS-provided physical RAM map:
Jun 20 19:14:55.998100 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 19:14:55.998106 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 19:14:55.998112 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jun 20 19:14:55.998120 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jun 20 19:14:55.998128 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jun 20 19:14:55.998134 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jun 20 19:14:55.998140 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 19:14:55.998147 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 19:14:55.998153 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 19:14:55.998159 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 19:14:55.998166 kernel: printk: legacy bootconsole [earlyser0] enabled
Jun 20 19:14:55.998176 kernel: NX (Execute Disable) protection: active
Jun 20 19:14:55.998183 kernel: APIC: Static calls initialized
Jun 20 19:14:55.998190 kernel: efi: EFI v2.7 by Microsoft
Jun 20 19:14:55.998198 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5718 RNG=0x3ffd2018
Jun 20 19:14:55.998205 kernel: random: crng init done
Jun 20 19:14:55.998212 kernel: secureboot: Secure boot disabled
Jun 20 19:14:55.998219 kernel: SMBIOS 3.1.0 present.
Jun 20 19:14:55.998226 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jun 20 19:14:55.998234 kernel: DMI: Memory slots populated: 2/2
Jun 20 19:14:55.998243 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 19:14:55.998250 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jun 20 19:14:55.998257 kernel: Hyper-V: Nested features: 0x3e0101
Jun 20 19:14:55.998264 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 19:14:55.998272 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 19:14:55.998279 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:55.998287 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:55.998294 kernel: tsc: Detected 2300.000 MHz processor
Jun 20 19:14:55.998301 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:14:55.998309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:14:55.998319 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jun 20 19:14:55.998327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 19:14:55.998335 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:14:55.998342 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jun 20 19:14:55.998349 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jun 20 19:14:55.998357 kernel: Using GB pages for direct mapping
Jun 20 19:14:55.998364 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:14:55.998375 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 19:14:55.998385 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998393 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998401 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 19:14:55.998408 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 19:14:55.998417 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998425 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998434 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998442 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:55.998450 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:55.998458 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:55.998466 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 19:14:55.998474 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jun 20 19:14:55.998482 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 19:14:55.998490 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 19:14:55.998497 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 19:14:55.998507 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 19:14:55.998514 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jun 20 19:14:55.998522 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jun 20 19:14:55.998529 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 19:14:55.998537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 20 19:14:55.998545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jun 20 19:14:55.998554 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jun 20 19:14:55.998562 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jun 20 19:14:55.998571 kernel: Zone ranges:
Jun 20 19:14:55.998581 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:14:55.998589 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:14:55.998598 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:55.998606 kernel: Device empty
Jun 20 19:14:55.998613 kernel: Movable zone start for each node
Jun 20 19:14:55.998621 kernel: Early memory node ranges
Jun 20 19:14:55.998629 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 19:14:55.998637 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jun 20 19:14:55.998646 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jun 20 19:14:55.998656 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 19:14:55.998664 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:55.998672 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 19:14:55.998679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:14:55.998687 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 19:14:55.998695 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jun 20 19:14:55.998703 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jun 20 19:14:55.998711 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 19:14:55.998719 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:14:55.998729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:14:55.998737 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:14:55.998745 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 19:14:55.998753 kernel: TSC deadline timer available
Jun 20 19:14:55.998761 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:14:55.998769 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:14:55.998777 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:14:55.998785 kernel: CPU topo: Max. threads per core: 2
Jun 20 19:14:55.998794 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:14:55.998804 kernel: CPU topo: Num. threads per package: 2
Jun 20 19:14:55.998812 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:14:55.998821 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 19:14:55.998829 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 19:14:55.998838 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:14:55.998847 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:14:55.998855 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:14:55.998863 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:14:55.998872 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:14:55.998882 kernel: Hyper-V: PV spinlocks enabled
Jun 20 19:14:55.998891 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:14:55.998901 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:55.998910 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:14:55.998919 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 19:14:55.998947 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:14:55.998955 kernel: Fallback order for Node 0: 0
Jun 20 19:14:55.998961 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jun 20 19:14:55.998971 kernel: Policy zone: Normal
Jun 20 19:14:55.998978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:14:55.998985 kernel: software IO TLB: area num 2.
Jun 20 19:14:55.998991 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:14:55.998998 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:14:55.999006 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:14:55.999013 kernel: Dynamic Preempt: voluntary
Jun 20 19:14:55.999021 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:14:55.999030 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:14:55.999046 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:14:55.999055 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:14:55.999063 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:14:55.999073 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:14:55.999082 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:14:55.999091 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:14:55.999100 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:55.999109 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:55.999118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:55.999127 kernel: Using NULL legacy PIC
Jun 20 19:14:55.999138 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 19:14:55.999146 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:14:55.999155 kernel: Console: colour dummy device 80x25
Jun 20 19:14:55.999165 kernel: printk: legacy console [tty1] enabled
Jun 20 19:14:55.999173 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:14:55.999180 kernel: printk: legacy bootconsole [earlyser0] disabled
Jun 20 19:14:55.999188 kernel: ACPI: Core revision 20240827
Jun 20 19:14:55.999196 kernel: Failed to register legacy timer interrupt
Jun 20 19:14:55.999203 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:14:55.999211 kernel: x2apic enabled
Jun 20 19:14:55.999219 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:14:55.999227 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jun 20 19:14:55.999235 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 19:14:55.999243 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jun 20 19:14:55.999252 kernel: Hyper-V: Using IPI hypercalls
Jun 20 19:14:55.999260 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 19:14:55.999269 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 19:14:55.999277 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 19:14:55.999286 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 19:14:55.999294 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 19:14:55.999304 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 19:14:55.999313 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 20 19:14:55.999321 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jun 20 19:14:55.999329 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:14:55.999339 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:14:55.999348 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:14:55.999356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:14:55.999365 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:14:55.999372 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:14:55.999380 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 19:14:55.999389 kernel: RETBleed: Vulnerable
Jun 20 19:14:55.999396 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:14:55.999410 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 19:14:55.999418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:14:55.999426 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:14:55.999436 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:14:55.999445 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 19:14:55.999454 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 19:14:55.999463 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 19:14:55.999471 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jun 20 19:14:55.999480 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jun 20 19:14:55.999488 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jun 20 19:14:55.999496 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:14:55.999505 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 19:14:55.999514 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 19:14:55.999522 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 19:14:55.999533 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jun 20 19:14:55.999542 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jun 20 19:14:55.999550 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jun 20 19:14:55.999559 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jun 20 19:14:55.999568 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:14:55.999577 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:14:55.999585 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:14:55.999594 kernel: landlock: Up and running.
Jun 20 19:14:55.999602 kernel: SELinux: Initializing.
Jun 20 19:14:55.999611 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:55.999620 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:55.999629 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jun 20 19:14:55.999640 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jun 20 19:14:55.999649 kernel: signal: max sigframe size: 11952
Jun 20 19:14:55.999657 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:14:55.999666 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:14:55.999675 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:14:55.999684 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 19:14:55.999693 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:14:55.999701 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:14:55.999710 kernel: .... node #0, CPUs: #1
Jun 20 19:14:55.999721 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:14:55.999729 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jun 20 19:14:55.999738 kernel: Memory: 8077020K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299992K reserved, 0K cma-reserved)
Jun 20 19:14:55.999745 kernel: devtmpfs: initialized
Jun 20 19:14:55.999753 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:14:55.999760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 19:14:55.999768 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:14:55.999776 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:14:55.999785 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:14:55.999796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:14:55.999804 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:14:55.999813 kernel: audit: type=2000 audit(1750446892.029:1): state=initialized audit_enabled=0 res=1
Jun 20 19:14:55.999822 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:14:55.999831 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:14:55.999841 kernel: cpuidle: using governor menu
Jun 20 19:14:55.999850 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:14:55.999859 kernel: dca service started, version 1.12.1
Jun 20 19:14:55.999869 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jun 20 19:14:55.999879 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jun 20 19:14:55.999887 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:14:55.999895 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:14:55.999904 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:14:55.999913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:14:55.999922 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:14:55.999946 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:14:55.999956 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:14:55.999968 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:14:55.999977 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:14:55.999985 kernel: ACPI: Interpreter enabled
Jun 20 19:14:55.999995 kernel: ACPI: PM: (supports S0 S5)
Jun 20 19:14:56.000004 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:14:56.000012 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:14:56.000020 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 19:14:56.000028 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 19:14:56.000037 kernel: iommu: Default domain type: Translated
Jun 20 19:14:56.000046 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:14:56.000056 kernel: efivars: Registered efivars operations
Jun 20 19:14:56.000065 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:14:56.000074 kernel: PCI: System does not support PCI
Jun 20 19:14:56.000081 kernel: vgaarb: loaded
Jun 20 19:14:56.000090 kernel: clocksource: Switched to clocksource tsc-early
Jun 20 19:14:56.000097 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:14:56.000106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:14:56.000115 kernel: pnp: PnP ACPI init
Jun 20 19:14:56.000122 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 19:14:56.000132 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:14:56.000140 kernel: NET: Registered PF_INET protocol family
Jun 20 19:14:56.000149 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 19:14:56.000158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 19:14:56.000167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:14:56.000176 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:14:56.000185 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 19:14:56.000194 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 19:14:56.000205 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:56.000214 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:56.000223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:14:56.000231 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:14:56.000239 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:14:56.000248 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:14:56.000257 kernel: software IO TLB: mapped [mem 0x000000003a9c3000-0x000000003e9c3000] (64MB)
Jun 20 19:14:56.000266 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jun 20 19:14:56.000275 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jun 20 19:14:56.000286 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 20 19:14:56.000295 kernel: clocksource: Switched to clocksource tsc
Jun 20 19:14:56.000303 kernel: Initialise system trusted keyrings
Jun 20 19:14:56.000312 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 19:14:56.000321 kernel: Key type asymmetric registered
Jun 20 19:14:56.000329 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:14:56.000338 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:14:56.000347 kernel: io scheduler mq-deadline registered
Jun 20 19:14:56.000356 kernel: io scheduler kyber registered
Jun 20 19:14:56.000367 kernel: io scheduler bfq registered
Jun 20 19:14:56.000376 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:14:56.000384 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:14:56.000393 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:56.000402 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 19:14:56.000411 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:56.000419 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 19:14:56.000563 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 19:14:56.000641 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:14:55 UTC (1750446895)
Jun 20 19:14:56.000711 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 19:14:56.000722 kernel: intel_pstate: Intel P-state driver initializing
Jun 20 19:14:56.000731 kernel: efifb: probing for efifb
Jun 20 19:14:56.000741 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 19:14:56.000750 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 19:14:56.000759 kernel: efifb: scrolling: redraw
Jun 20 19:14:56.000767 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 19:14:56.000776 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:14:56.000787 kernel: fb0: EFI VGA frame buffer device
Jun 20 19:14:56.000795 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:14:56.000804 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:14:56.000813 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:14:56.000822 kernel: Segment Routing with IPv6
Jun 20 19:14:56.000831 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:14:56.000839 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:14:56.000875 kernel: Key type dns_resolver registered
Jun 20 19:14:56.000884 kernel: IPI shorthand broadcast: enabled
Jun 20 19:14:56.000896 kernel: sched_clock: Marking stable (3049004455, 100269264)->(3530757098, -381483379)
Jun 20 19:14:56.000904 kernel: registered taskstats version 1
Jun 20 19:14:56.000913 kernel: Loading compiled-in X.509 certificates
Jun 20 19:14:56.000923 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:14:56.000967 kernel: Demotion targets for Node 0: null
Jun 20 19:14:56.000976 kernel: Key type .fscrypt registered
Jun 20 19:14:56.000985 kernel: Key type fscrypt-provisioning registered
Jun 20 19:14:56.000995 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:14:56.001003 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:14:56.001015 kernel: ima: No architecture policies found
Jun 20 19:14:56.001024 kernel: clk: Disabling unused clocks
Jun 20 19:14:56.001032 kernel: Warning: unable to open an initial console.
Jun 20 19:14:56.001041 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:14:56.001050 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:14:56.001059 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:14:56.001068 kernel: Run /init as init process
Jun 20 19:14:56.001076 kernel: with arguments:
Jun 20 19:14:56.001085 kernel: /init
Jun 20 19:14:56.001095 kernel: with environment:
Jun 20 19:14:56.001104 kernel: HOME=/
Jun 20 19:14:56.001112 kernel: TERM=linux
Jun 20 19:14:56.001121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:14:56.001131 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:14:56.001145 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:14:56.001155 systemd[1]: Detected virtualization microsoft.
Jun 20 19:14:56.001166 systemd[1]: Detected architecture x86-64.
Jun 20 19:14:56.001176 systemd[1]: Running in initrd.
Jun 20 19:14:56.001185 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:14:56.001194 systemd[1]: Hostname set to .
Jun 20 19:14:56.001203 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:14:56.001212 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:14:56.001222 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:14:56.001231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:14:56.001244 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:14:56.001254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:14:56.001263 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:14:56.001273 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:14:56.001284 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:14:56.001294 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:14:56.001303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:14:56.001314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:14:56.001323 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:14:56.001333 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:14:56.001342 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:14:56.001352 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:14:56.001360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:14:56.001370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:14:56.001379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:14:56.001388 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:14:56.001400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:14:56.001409 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:14:56.001418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:14:56.001428 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:14:56.001438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:14:56.001447 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:14:56.001456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:14:56.001466 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:14:56.001477 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:14:56.001487 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:14:56.001497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:14:56.001515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:14:56.001543 systemd-journald[205]: Collecting audit messages is disabled.
Jun 20 19:14:56.001571 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:14:56.001583 systemd-journald[205]: Journal started
Jun 20 19:14:56.001608 systemd-journald[205]: Runtime Journal (/run/log/journal/f4340d74a6a34079bc167e33b0313cb0) is 8M, max 158.9M, 150.9M free.
Jun 20 19:14:56.008943 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:14:56.012172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:14:56.018090 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:14:56.021863 systemd-modules-load[207]: Inserted module 'overlay'
Jun 20 19:14:56.027153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:14:56.040353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:14:56.051119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:14:56.065946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:14:56.064438 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:14:56.070220 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jun 20 19:14:56.074014 kernel: Bridge firewalling registered
Jun 20 19:14:56.072534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:14:56.079118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:14:56.081325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:14:56.088323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:14:56.105064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:14:56.108029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:14:56.121086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:14:56.127158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:14:56.129286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:14:56.130264 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:14:56.134038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:14:56.158672 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:14:56.175371 systemd-resolved[244]: Positive Trust Anchors: Jun 20 19:14:56.176979 systemd-resolved[244]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:14:56.177018 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:14:56.180552 systemd-resolved[244]: Defaulting to hostname 'linux'. Jun 20 19:14:56.181517 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:14:56.181694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:14:56.250960 kernel: SCSI subsystem initialized Jun 20 19:14:56.258950 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:14:56.268954 kernel: iscsi: registered transport (tcp) Jun 20 19:14:56.287039 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:14:56.287100 kernel: QLogic iSCSI HBA Driver Jun 20 19:14:56.301673 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:14:56.312564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:14:56.318546 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:14:56.350720 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:14:56.356020 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 20 19:14:56.404962 kernel: raid6: avx512x4 gen() 44917 MB/s Jun 20 19:14:56.422940 kernel: raid6: avx512x2 gen() 43598 MB/s Jun 20 19:14:56.440942 kernel: raid6: avx512x1 gen() 25510 MB/s Jun 20 19:14:56.457939 kernel: raid6: avx2x4 gen() 36030 MB/s Jun 20 19:14:56.474938 kernel: raid6: avx2x2 gen() 37937 MB/s Jun 20 19:14:56.493388 kernel: raid6: avx2x1 gen() 27206 MB/s Jun 20 19:14:56.493480 kernel: raid6: using algorithm avx512x4 gen() 44917 MB/s Jun 20 19:14:56.511453 kernel: raid6: .... xor() 7147 MB/s, rmw enabled Jun 20 19:14:56.511554 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:14:56.529953 kernel: xor: automatically using best checksumming function avx Jun 20 19:14:56.650955 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:14:56.656213 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:14:56.659078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:14:56.677441 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 20 19:14:56.681563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:14:56.689394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:14:56.721432 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jun 20 19:14:56.740113 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:14:56.743054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:14:56.774062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:14:56.781636 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:14:56.838099 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:14:56.848238 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jun 20 19:14:56.854061 kernel: AES CTR mode by8 optimization enabled Jun 20 19:14:56.849165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:56.859023 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:56.872086 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 19:14:56.879774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:56.891816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:14:56.892043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:56.900941 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 19:14:56.907316 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:14:56.907371 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:14:56.907945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:56.920944 kernel: PTP clock support registered Jun 20 19:14:56.925796 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 19:14:56.925839 kernel: hv_vmbus: registering driver hv_utils Jun 20 19:14:56.928310 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 19:14:56.929585 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 19:14:56.931526 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 19:14:56.804919 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 19:14:56.807885 systemd-journald[205]: Time jumped backwards, rotating. Jun 20 19:14:56.806523 systemd-resolved[244]: Clock change detected. Flushing caches. 
Jun 20 19:14:56.815594 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 19:14:56.817745 kernel: scsi host0: storvsc_host_t Jun 20 19:14:56.822206 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 19:14:56.822260 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:14:56.822271 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 19:14:56.828454 kernel: hv_vmbus: registering driver hv_pci Jun 20 19:14:56.835470 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:56.839374 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 19:14:56.846047 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 19:14:56.846373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:56.858691 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 19:14:56.858837 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdf22a97 (unnamed net_device) (uninitialized): VF slot 1 added Jun 20 19:14:56.858940 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 19:14:56.859041 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:14:56.859472 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 19:14:56.861450 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 20 19:14:56.865157 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 20 19:14:56.865334 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:56.880458 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 20 19:14:56.880503 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 20 19:14:56.888597 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe 
x16 link) Jun 20 19:14:56.891523 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:56.891785 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 20 19:14:56.907453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#38 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:56.912075 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 20 19:14:56.912280 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:56.931466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#62 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:57.168483 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 19:14:57.174453 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:57.440459 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:14:57.640984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 20 19:14:57.677142 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:14:57.712806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:14:57.713855 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:14:57.714357 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:14:57.726789 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 20 19:14:57.727991 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:14:57.728832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:14:57.728854 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:14:57.730548 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jun 20 19:14:57.732527 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:14:57.761347 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:14:57.767457 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:57.876611 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:57.884460 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 20 19:14:57.888728 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 20 19:14:57.891645 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:57.915285 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 20 19:14:57.915363 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 20 19:14:57.915381 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 20 19:14:57.915402 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 20 19:14:57.931460 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:57.941309 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 20 19:14:57.941578 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 20 19:14:57.952530 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:57.960448 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 20 19:14:57.963964 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdf22a97 eth0: VF registering: eth1 Jun 20 19:14:57.964121 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 20 19:14:57.968458 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 20 19:14:58.782498 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:58.783027 disk-uuid[681]: The operation has completed successfully. 
Jun 20 19:14:58.832682 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:14:58.832779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:14:58.871831 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:14:58.891754 sh[720]: Success Jun 20 19:14:58.922799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:14:58.922872 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:14:58.924898 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:14:58.934471 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:14:59.174241 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:14:59.181522 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:14:59.196041 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:14:59.208186 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:14:59.208234 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (733) Jun 20 19:14:59.211669 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:14:59.211771 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:59.212718 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:14:59.548699 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:14:59.552900 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:14:59.556720 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jun 20 19:14:59.557412 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:14:59.567560 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:14:59.598480 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (766) Jun 20 19:14:59.600454 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:59.603025 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:59.603060 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:59.647570 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:59.648022 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:14:59.654780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:14:59.659569 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:14:59.673184 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:14:59.714657 systemd-networkd[902]: lo: Link UP Jun 20 19:14:59.714663 systemd-networkd[902]: lo: Gained carrier Jun 20 19:14:59.721755 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:14:59.716735 systemd-networkd[902]: Enumeration completed Jun 20 19:14:59.717222 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:14:59.717941 systemd[1]: Reached target network.target - Network. Jun 20 19:14:59.718240 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:59.718244 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:14:59.734823 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:14:59.735023 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdf22a97 eth0: Data path switched to VF: enP30832s1 Jun 20 19:14:59.734888 systemd-networkd[902]: enP30832s1: Link UP Jun 20 19:14:59.734954 systemd-networkd[902]: eth0: Link UP Jun 20 19:14:59.735039 systemd-networkd[902]: eth0: Gained carrier Jun 20 19:14:59.735049 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:59.739620 systemd-networkd[902]: enP30832s1: Gained carrier Jun 20 19:14:59.753468 systemd-networkd[902]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:15:00.542294 ignition[901]: Ignition 2.21.0 Jun 20 19:15:00.542306 ignition[901]: Stage: fetch-offline Jun 20 19:15:00.544977 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:15:00.542407 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:00.548374 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 20 19:15:00.542414 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:00.542528 ignition[901]: parsed url from cmdline: "" Jun 20 19:15:00.542531 ignition[901]: no config URL provided Jun 20 19:15:00.542536 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:15:00.542542 ignition[901]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:15:00.542547 ignition[901]: failed to fetch config: resource requires networking Jun 20 19:15:00.543585 ignition[901]: Ignition finished successfully Jun 20 19:15:00.575522 ignition[912]: Ignition 2.21.0 Jun 20 19:15:00.575532 ignition[912]: Stage: fetch Jun 20 19:15:00.575756 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:00.575765 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:00.575850 ignition[912]: parsed url from cmdline: "" Jun 20 19:15:00.575853 ignition[912]: no config URL provided Jun 20 19:15:00.575858 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:15:00.575864 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:15:00.575908 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 19:15:00.647092 ignition[912]: GET result: OK Jun 20 19:15:00.647902 ignition[912]: config has been read from IMDS userdata Jun 20 19:15:00.647932 ignition[912]: parsing config with SHA512: 82d2acf0ea822569f2a9d203e19e27145dcec878f8aaf89e4fe1687c5b0f593566526459ca59402d3fa3fa80f4dc415fb2f9bd2acf6bdeb97b76f09eba584fa9 Jun 20 19:15:00.653859 unknown[912]: fetched base config from "system" Jun 20 19:15:00.654522 unknown[912]: fetched base config from "system" Jun 20 19:15:00.654892 ignition[912]: fetch: fetch complete Jun 20 19:15:00.654529 unknown[912]: fetched user config from "azure" Jun 20 19:15:00.654897 ignition[912]: fetch: fetch passed Jun 20 19:15:00.658720 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Jun 20 19:15:00.654944 ignition[912]: Ignition finished successfully Jun 20 19:15:00.662537 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:15:00.691097 ignition[919]: Ignition 2.21.0 Jun 20 19:15:00.691108 ignition[919]: Stage: kargs Jun 20 19:15:00.691343 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:00.691351 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:00.694751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:15:00.692356 ignition[919]: kargs: kargs passed Jun 20 19:15:00.698882 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:15:00.692395 ignition[919]: Ignition finished successfully Jun 20 19:15:00.732668 ignition[927]: Ignition 2.21.0 Jun 20 19:15:00.732679 ignition[927]: Stage: disks Jun 20 19:15:00.734943 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:15:00.732891 ignition[927]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:00.736318 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:15:00.732899 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:00.739388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:15:00.733712 ignition[927]: disks: disks passed Jun 20 19:15:00.746690 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:15:00.733750 ignition[927]: Ignition finished successfully Jun 20 19:15:00.756566 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:15:00.759713 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:15:00.762834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 20 19:15:00.849139 systemd-fsck[935]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 19:15:00.854421 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:15:00.859147 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:15:01.133455 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:15:01.134340 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:15:01.136966 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:15:01.155685 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:15:01.161521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:15:01.171607 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:15:01.176990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:15:01.179509 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:15:01.181776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:15:01.188896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:15:01.196228 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (944) Jun 20 19:15:01.196258 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:15:01.198091 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:15:01.201456 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:15:01.206189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:15:01.505559 systemd-networkd[902]: enP30832s1: Gained IPv6LL Jun 20 19:15:01.697582 systemd-networkd[902]: eth0: Gained IPv6LL Jun 20 19:15:01.717157 coreos-metadata[946]: Jun 20 19:15:01.717 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:15:01.726298 coreos-metadata[946]: Jun 20 19:15:01.726 INFO Fetch successful Jun 20 19:15:01.726298 coreos-metadata[946]: Jun 20 19:15:01.726 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:15:01.738866 coreos-metadata[946]: Jun 20 19:15:01.738 INFO Fetch successful Jun 20 19:15:01.755731 coreos-metadata[946]: Jun 20 19:15:01.755 INFO wrote hostname ci-4344.1.0-a-a13bac16ef to /sysroot/etc/hostname Jun 20 19:15:01.758817 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:15:01.812331 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:15:01.850709 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:15:01.869870 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:15:01.874728 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:15:02.968197 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:15:02.972304 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:15:02.988574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:15:02.996288 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jun 20 19:15:02.998122 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:15:03.032835 ignition[1065]: INFO : Ignition 2.21.0 Jun 20 19:15:03.032835 ignition[1065]: INFO : Stage: mount Jun 20 19:15:03.037605 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:03.037605 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:03.037605 ignition[1065]: INFO : mount: mount passed Jun 20 19:15:03.037605 ignition[1065]: INFO : Ignition finished successfully Jun 20 19:15:03.037102 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:15:03.037999 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:15:03.045743 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:15:03.067032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:15:03.091078 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1077) Jun 20 19:15:03.091125 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:15:03.092552 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:15:03.092567 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:15:03.103036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:15:03.138254 ignition[1094]: INFO : Ignition 2.21.0 Jun 20 19:15:03.138254 ignition[1094]: INFO : Stage: files Jun 20 19:15:03.144485 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:15:03.144485 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:15:03.144485 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:15:03.157293 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:15:03.157293 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:15:03.216062 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:15:03.218466 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:15:03.220844 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:15:03.218502 unknown[1094]: wrote ssh authorized keys file for user: core Jun 20 19:15:03.226030 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:15:03.226030 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 19:15:03.263849 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:15:03.346633 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:15:03.346633 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:15:03.354499 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:15:03.738427 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:15:03.798062 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:15:03.801532 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:15:03.829511 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:15:03.829511 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:15:03.829511 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:15:03.838903 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:15:03.838903 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:15:03.838903 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 20 19:15:04.471766 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:15:04.756061 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:15:04.756061 ignition[1094]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:15:04.790053 ignition[1094]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:15:04.798428 ignition[1094]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:15:04.798428 ignition[1094]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:15:04.804551 ignition[1094]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:15:04.804551 ignition[1094]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:15:04.804551 ignition[1094]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:15:04.804551 ignition[1094]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:15:04.804551 ignition[1094]: INFO : files: files passed
Jun 20 19:15:04.804551 ignition[1094]: INFO : Ignition finished successfully
Jun 20 19:15:04.804158 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:15:04.819384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:15:04.823898 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:15:04.839864 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:15:04.839963 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:15:04.859059 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:15:04.859059 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:15:04.866487 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:15:04.864160 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:15:04.867290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:15:04.876314 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:15:04.910918 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:15:04.911017 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:15:04.914399 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:15:04.920522 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:15:04.924522 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:15:04.925542 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:15:04.948378 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:15:04.953345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:15:04.972567 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:15:04.977578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:15:04.979546 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:15:04.979815 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:15:04.979932 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:15:04.980479 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:15:04.981036 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:15:04.981750 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:15:04.982356 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:15:04.982911 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:15:04.983299 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:15:04.983682 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:15:05.055452 ignition[1148]: INFO : Ignition 2.21.0
Jun 20 19:15:05.055452 ignition[1148]: INFO : Stage: umount
Jun 20 19:15:05.055452 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:15:05.055452 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:15:05.055452 ignition[1148]: INFO : umount: umount passed
Jun 20 19:15:05.055452 ignition[1148]: INFO : Ignition finished successfully
Jun 20 19:15:04.983928 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:15:04.984609 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:15:04.984974 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:15:04.985403 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:15:04.985709 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:15:04.985820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:15:04.986440 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:15:04.986822 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:15:04.987133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:15:04.988895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:15:05.008369 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:15:05.008540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:15:05.010329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:15:05.010461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:15:05.010716 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:15:05.010832 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:15:05.011112 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 19:15:05.011212 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:15:05.013540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:15:05.013668 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:15:05.013805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:15:05.025206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:15:05.030096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:15:05.030259 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:15:05.037223 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:15:05.037377 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:15:05.055146 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:15:05.055243 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:15:05.056046 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:15:05.056125 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:15:05.061868 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:15:05.061949 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:15:05.071536 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:15:05.071582 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:15:05.074497 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:15:05.074532 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:15:05.076812 systemd[1]: Stopped target network.target - Network.
Jun 20 19:15:05.076846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:15:05.076885 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:15:05.077124 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:15:05.077144 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:15:05.082107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:15:05.088630 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:15:05.090939 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:15:05.095198 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:15:05.095228 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:15:05.223695 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdf22a97 eth0: Data path switched from VF: enP30832s1
Jun 20 19:15:05.107756 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:15:05.229519 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:15:05.107787 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:15:05.116506 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:15:05.116567 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:15:05.121526 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:15:05.121565 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:15:05.126655 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:15:05.128747 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:15:05.141684 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:15:05.141772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:15:05.146780 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:15:05.146971 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:15:05.147055 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:15:05.153347 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:15:05.154180 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:15:05.160082 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:15:05.160118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:15:05.163921 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:15:05.168513 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:15:05.168569 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:15:05.168831 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:15:05.168865 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:15:05.169968 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:15:05.170002 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:15:05.176534 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:15:05.176580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:15:05.176853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:15:05.203891 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:15:05.203991 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:15:05.204037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:15:05.209985 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:15:05.214773 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:15:05.221227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:15:05.221276 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:15:05.224358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:15:05.224597 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:15:05.231603 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:15:05.231658 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:15:05.284539 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:15:05.284644 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:15:05.289124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:15:05.289183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:15:05.294573 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:15:05.294630 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:15:05.294690 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:15:05.298131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:15:05.298184 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:15:05.298890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:15:05.298926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:15:05.304537 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:15:05.304598 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:15:05.304639 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:15:05.304953 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:15:05.305261 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:15:05.313739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:15:05.313815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:15:05.335750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:15:05.335840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:15:05.338296 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:15:05.338551 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:15:05.338629 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:15:05.340149 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:15:05.356848 systemd[1]: Switching root.
Jun 20 19:15:05.423218 systemd-journald[205]: Journal stopped
Jun 20 19:15:09.347982 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:15:09.348021 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:15:09.348038 kernel: SELinux: policy capability open_perms=1
Jun 20 19:15:09.348048 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:15:09.348058 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:15:09.348068 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:15:09.348087 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:15:09.348099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:15:09.348108 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:15:09.348119 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:15:09.348128 kernel: audit: type=1403 audit(1750446906.754:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:15:09.348141 systemd[1]: Successfully loaded SELinux policy in 121.097ms.
Jun 20 19:15:09.348154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.088ms.
Jun 20 19:15:09.348171 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:15:09.348185 systemd[1]: Detected virtualization microsoft.
Jun 20 19:15:09.348195 systemd[1]: Detected architecture x86-64.
Jun 20 19:15:09.348206 systemd[1]: Detected first boot.
Jun 20 19:15:09.348217 systemd[1]: Hostname set to .
Jun 20 19:15:09.348234 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:15:09.348247 zram_generator::config[1191]: No configuration found.
Jun 20 19:15:09.348261 kernel: Guest personality initialized and is inactive
Jun 20 19:15:09.348272 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jun 20 19:15:09.348284 kernel: Initialized host personality
Jun 20 19:15:09.348296 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:15:09.348310 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:15:09.348328 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:15:09.348343 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:15:09.348357 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:15:09.348368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:15:09.348381 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:15:09.348392 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:15:09.348402 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:15:09.348413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:15:09.349520 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:15:09.349539 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:15:09.349550 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:15:09.349561 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:15:09.349571 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:15:09.349582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:15:09.349593 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:15:09.349608 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:15:09.349621 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:15:09.349632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:15:09.349643 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:15:09.349654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:15:09.349664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:15:09.349675 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:15:09.349685 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:15:09.349699 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:15:09.349709 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:15:09.349720 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:15:09.349731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:15:09.349741 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:15:09.349752 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:15:09.349762 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:15:09.349772 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:15:09.349785 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:15:09.349796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:15:09.349807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:15:09.349818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:15:09.349828 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:15:09.349840 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:15:09.349851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:15:09.349861 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:15:09.349872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:09.349883 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:15:09.349893 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:15:09.349904 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:15:09.349916 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:15:09.349929 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:15:09.349939 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:15:09.349950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:15:09.349961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:15:09.349971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:15:09.349982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:15:09.349992 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:15:09.350003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:15:09.350013 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:15:09.350026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:15:09.350036 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:15:09.350047 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:15:09.350058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:15:09.350069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:15:09.350079 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:15:09.350090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:15:09.350100 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:15:09.350112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:15:09.350123 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:15:09.350134 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:15:09.350145 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:15:09.350156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:15:09.350167 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:15:09.350177 systemd[1]: Stopped verity-setup.service.
Jun 20 19:15:09.350188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:09.350200 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:15:09.350211 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:15:09.350222 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:15:09.350233 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:15:09.350269 systemd-journald[1284]: Collecting audit messages is disabled.
Jun 20 19:15:09.350297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:15:09.350308 systemd-journald[1284]: Journal started
Jun 20 19:15:09.350334 systemd-journald[1284]: Runtime Journal (/run/log/journal/5eb2bedf3424439bb402fbc53a9e18e8) is 8M, max 158.9M, 150.9M free.
Jun 20 19:15:08.911571 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:15:08.923181 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 20 19:15:08.923586 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:15:09.354447 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:15:09.356512 kernel: loop: module loaded
Jun 20 19:15:09.357756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:15:09.359380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:15:09.362795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:15:09.371408 kernel: fuse: init (API version 7.41)
Jun 20 19:15:09.365570 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:15:09.365776 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:15:09.368943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:15:09.369107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:15:09.373021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:15:09.373266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:15:09.376444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:15:09.376693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:15:09.379744 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:15:09.379974 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:15:09.383087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:15:09.386509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:15:09.390069 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:15:09.393531 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:15:09.406269 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:15:09.412521 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:15:09.425555 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:15:09.428163 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:15:09.428200 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:15:09.433799 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:15:09.438077 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:15:09.440159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:15:09.443406 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:15:09.455876 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:15:09.457932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:15:09.460414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:15:09.462637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:15:09.463594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:15:09.468881 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:15:09.478582 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:15:09.484613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:15:09.488812 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:15:09.493050 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:15:09.505955 systemd-journald[1284]: Time spent on flushing to /var/log/journal/5eb2bedf3424439bb402fbc53a9e18e8 is 48.914ms for 987 entries.
Jun 20 19:15:09.505955 systemd-journald[1284]: System Journal (/var/log/journal/5eb2bedf3424439bb402fbc53a9e18e8) is 11.8M, max 2.6G, 2.6G free.
Jun 20 19:15:09.631548 systemd-journald[1284]: Received client request to flush runtime journal.
Jun 20 19:15:09.631608 kernel: ACPI: bus type drm_connector registered
Jun 20 19:15:09.631624 systemd-journald[1284]: /var/log/journal/5eb2bedf3424439bb402fbc53a9e18e8/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jun 20 19:15:09.631642 kernel: loop0: detected capacity change from 0 to 28496
Jun 20 19:15:09.631653 systemd-journald[1284]: Rotating system journal.
Jun 20 19:15:09.505800 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:15:09.509887 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:15:09.515932 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:15:09.531747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:15:09.531941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:15:09.563196 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:15:09.633113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:15:09.640579 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:15:09.815686 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:15:09.819725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:15:09.906468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:15:09.922256 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Jun 20 19:15:09.922275 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Jun 20 19:15:09.925040 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:15:09.926834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:15:09.939467 kernel: loop1: detected capacity change from 0 to 221472
Jun 20 19:15:09.985459 kernel: loop2: detected capacity change from 0 to 113872
Jun 20 19:15:10.340381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:15:10.344893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:15:10.350545 kernel: loop3: detected capacity change from 0 to 146240
Jun 20 19:15:10.381548 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Jun 20 19:15:10.548122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:15:10.555934 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:15:10.612139 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:15:10.636582 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:15:10.676461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 19:15:10.719473 kernel: loop4: detected capacity change from 0 to 28496
Jun 20 19:15:10.736037 kernel: hv_vmbus: registering driver hv_balloon
Jun 20 19:15:10.736123 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:15:10.740525 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 20 19:15:10.750486 kernel: loop5: detected capacity change from 0 to 221472
Jun 20 19:15:10.753460 kernel: hv_vmbus: registering driver hyperv_fb
Jun 20 19:15:10.757256 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jun 20 19:15:10.757315 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jun 20 19:15:10.759035 kernel: Console: switching to colour dummy device 80x25
Jun 20 19:15:10.763917 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:15:10.778486 kernel: loop6: detected capacity change from 0 to 113872
Jun 20 19:15:10.792155 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:15:10.796535 kernel: loop7: detected capacity change from 0 to 146240
Jun 20 19:15:10.819978 (sd-merge)[1405]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 19:15:10.840474 (sd-merge)[1405]: Merged extensions into '/usr'.
Jun 20 19:15:10.850960 systemd[1]: Reload requested from client PID 1331 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:15:10.850978 systemd[1]: Reloading...
Jun 20 19:15:10.972475 zram_generator::config[1457]: No configuration found.
Jun 20 19:15:11.157229 systemd-networkd[1368]: lo: Link UP
Jun 20 19:15:11.157238 systemd-networkd[1368]: lo: Gained carrier
Jun 20 19:15:11.163422 systemd-networkd[1368]: Enumeration completed
Jun 20 19:15:11.163799 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:15:11.163810 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:15:11.167469 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jun 20 19:15:11.173457 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:15:11.177741 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdf22a97 eth0: Data path switched to VF: enP30832s1
Jun 20 19:15:11.179962 systemd-networkd[1368]: enP30832s1: Link UP
Jun 20 19:15:11.180040 systemd-networkd[1368]: eth0: Link UP
Jun 20 19:15:11.180042 systemd-networkd[1368]: eth0: Gained carrier
Jun 20 19:15:11.180061 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:15:11.188675 systemd-networkd[1368]: enP30832s1: Gained carrier
Jun 20 19:15:11.198809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:15:11.204451 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jun 20 19:15:11.210485 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jun 20 19:15:11.328503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jun 20 19:15:11.331633 systemd[1]: Reloading finished in 480 ms.
Jun 20 19:15:11.348009 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:15:11.350775 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:15:11.392454 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:15:11.394544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:15:11.400073 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:15:11.406349 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:15:11.412349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:15:11.416795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:15:11.434359 systemd[1]: Reload requested from client PID 1528 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:15:11.434380 systemd[1]: Reloading...
Jun 20 19:15:11.441861 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:15:11.442146 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:15:11.442466 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:15:11.442745 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:15:11.443532 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:15:11.443832 systemd-tmpfiles[1532]: ACLs are not supported, ignoring.
Jun 20 19:15:11.443932 systemd-tmpfiles[1532]: ACLs are not supported, ignoring.
Jun 20 19:15:11.480421 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:15:11.480475 systemd-tmpfiles[1532]: Skipping /boot
Jun 20 19:15:11.497458 zram_generator::config[1568]: No configuration found.
Jun 20 19:15:11.508381 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:15:11.508400 systemd-tmpfiles[1532]: Skipping /boot
Jun 20 19:15:11.602630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:15:11.701884 systemd[1]: Reloading finished in 267 ms.
Jun 20 19:15:11.730327 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:15:11.733922 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:15:11.734262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:15:11.743292 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:15:11.747488 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:15:11.752666 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:15:11.755697 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:15:11.758590 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:15:11.761979 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.762155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:15:11.765689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:15:11.769693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:15:11.774820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:15:11.775071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:15:11.775255 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:15:11.775399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.779007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.779641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:15:11.779945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:15:11.780143 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:15:11.780229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.786183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.787733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:15:11.788664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:15:11.788849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:15:11.788946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:15:11.789100 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:15:11.789392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:15:11.790160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:15:11.792111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:15:11.804413 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:15:11.812258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:15:11.812385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:15:11.812845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:15:11.812987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:15:11.817300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:15:11.818138 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:15:11.818870 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:15:11.822122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:15:11.825669 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:15:11.825841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:15:11.851055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:15:11.892420 systemd-resolved[1634]: Positive Trust Anchors:
Jun 20 19:15:11.892722 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:15:11.892750 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:15:11.896823 systemd-resolved[1634]: Using system hostname 'ci-4344.1.0-a-a13bac16ef'.
Jun 20 19:15:11.898622 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:15:11.900415 systemd[1]: Reached target network.target - Network.
Jun 20 19:15:11.902071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:15:11.942142 augenrules[1670]: No rules
Jun 20 19:15:11.943243 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:15:11.943454 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:15:12.211480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:15:12.215813 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:15:12.705569 systemd-networkd[1368]: eth0: Gained IPv6LL
Jun 20 19:15:12.707898 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:15:12.711701 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:15:13.153588 systemd-networkd[1368]: enP30832s1: Gained IPv6LL
Jun 20 19:15:13.824682 ldconfig[1326]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:15:13.835097 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:15:13.838055 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:15:13.856028 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:15:13.857932 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:15:13.860755 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:15:13.863514 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:15:13.866485 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:15:13.868328 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:15:13.869938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:15:13.873484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:15:13.876478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:15:13.876507 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:15:13.877803 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:15:13.895250 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:15:13.898160 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:15:13.901887 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:15:13.905606 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:15:13.907513 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:15:13.912190 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:15:13.914068 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:15:13.918085 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:15:13.922236 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:15:13.923587 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:15:13.924940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:15:13.924966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:15:13.927423 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 20 19:15:13.932136 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:15:13.942373 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:15:13.947497 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:15:13.954366 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:15:13.959560 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:15:13.967697 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:15:13.970227 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:15:13.972619 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:15:13.975721 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jun 20 19:15:13.976593 jq[1691]: false
Jun 20 19:15:13.978546 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jun 20 19:15:13.982595 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jun 20 19:15:13.983814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:13.989851 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:15:13.995206 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:15:13.997683 KVP[1694]: KVP starting; pid is:1694
Jun 20 19:15:14.002613 kernel: hv_utils: KVP IC version 4.0
Jun 20 19:15:14.002239 KVP[1694]: KVP LIC Version: 3.1
Jun 20 19:15:14.003215 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:15:14.011602 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:15:14.015880 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:15:14.029898 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:15:14.033340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:15:14.033939 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:15:14.035108 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:15:14.038418 extend-filesystems[1692]: Found /dev/nvme0n1p6
Jun 20 19:15:14.040358 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:15:14.048509 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:15:14.051577 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:15:14.051842 oslogin_cache_refresh[1693]: Refreshing passwd entry cache
Jun 20 19:15:14.052425 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Refreshing passwd entry cache
Jun 20 19:15:14.051762 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:15:14.057153 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:15:14.058933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:15:14.071965 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Failure getting users, quitting
Jun 20 19:15:14.071965 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:15:14.071965 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Refreshing group entry cache
Jun 20 19:15:14.070256 oslogin_cache_refresh[1693]: Failure getting users, quitting
Jun 20 19:15:14.070274 oslogin_cache_refresh[1693]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:15:14.070317 oslogin_cache_refresh[1693]: Refreshing group entry cache
Jun 20 19:15:14.074775 (chronyd)[1683]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 20 19:15:14.080044 extend-filesystems[1692]: Found /dev/nvme0n1p9
Jun 20 19:15:14.085793 chronyd[1733]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 20 19:15:14.084551 (ntainerd)[1730]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:15:14.089788 systemd[1]: Started chronyd.service - NTP client/server.
Jun 20 19:15:14.088497 chronyd[1733]: Timezone right/UTC failed leap second check, ignoring
Jun 20 19:15:14.088656 chronyd[1733]: Loaded seccomp filter (level 2)
Jun 20 19:15:14.091767 extend-filesystems[1692]: Checking size of /dev/nvme0n1p9
Jun 20 19:15:14.096644 jq[1711]: true
Jun 20 19:15:14.100093 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Failure getting groups, quitting
Jun 20 19:15:14.100093 google_oslogin_nss_cache[1693]: oslogin_cache_refresh[1693]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:15:14.097623 oslogin_cache_refresh[1693]: Failure getting groups, quitting
Jun 20 19:15:14.097636 oslogin_cache_refresh[1693]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:15:14.100614 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:15:14.101209 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:15:14.104191 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:15:14.106929 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:15:14.107119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:15:14.133842 update_engine[1709]: I20250620 19:15:14.133754 1709 main.cc:92] Flatcar Update Engine starting
Jun 20 19:15:14.140416 tar[1716]: linux-amd64/helm
Jun 20 19:15:14.152425 jq[1741]: true
Jun 20 19:15:14.161683 extend-filesystems[1692]: Old size kept for /dev/nvme0n1p9
Jun 20 19:15:14.156890 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:15:14.158092 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:15:14.225756 dbus-daemon[1686]: [system] SELinux support is enabled
Jun 20 19:15:14.226146 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:15:14.232964 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:15:14.233746 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:15:14.236775 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:15:14.236799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:15:14.248314 update_engine[1709]: I20250620 19:15:14.246943 1709 update_check_scheduler.cc:74] Next update check in 4m46s
Jun 20 19:15:14.250851 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:15:14.257393 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:15:14.336144 bash[1777]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:15:14.338005 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:15:14.344841 systemd-logind[1706]: New seat seat0.
Jun 20 19:15:14.354308 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:15:14.355965 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:15:14.361100 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 19:15:14.375886 coreos-metadata[1685]: Jun 20 19:15:14.375 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 19:15:14.391242 coreos-metadata[1685]: Jun 20 19:15:14.390 INFO Fetch successful
Jun 20 19:15:14.391242 coreos-metadata[1685]: Jun 20 19:15:14.391 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 20 19:15:14.401771 coreos-metadata[1685]: Jun 20 19:15:14.399 INFO Fetch successful
Jun 20 19:15:14.401771 coreos-metadata[1685]: Jun 20 19:15:14.400 INFO Fetching http://168.63.129.16/machine/1c34519b-c0c0-4566-965c-c2452325c668/cd6e265f%2Debfd%2D45b5%2D9b2a%2De3becf5de565.%5Fci%2D4344.1.0%2Da%2Da13bac16ef?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 20 19:15:14.406500 coreos-metadata[1685]: Jun 20 19:15:14.406 INFO Fetch successful
Jun 20 19:15:14.406500 coreos-metadata[1685]: Jun 20 19:15:14.406 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 20 19:15:14.425017 coreos-metadata[1685]: Jun 20 19:15:14.422 INFO Fetch successful
Jun 20 19:15:14.482735 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:15:14.486949 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:15:14.514841 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:15:14.515710 locksmithd[1775]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:15:14.540391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:15:14.547820 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:15:14.551683 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jun 20 19:15:14.583669 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:15:14.583908 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:15:14.600725 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:15:14.638181 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jun 20 19:15:14.648620 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:15:14.655967 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:15:14.663765 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:15:14.666364 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:15:14.991242 containerd[1730]: time="2025-06-20T19:15:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:15:14.993469 containerd[1730]: time="2025-06-20T19:15:14.993262652Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:15:15.009463 containerd[1730]: time="2025-06-20T19:15:15.008489106Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.372µs"
Jun 20 19:15:15.009463 containerd[1730]: time="2025-06-20T19:15:15.008533448Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:15:15.009463 containerd[1730]: time="2025-06-20T19:15:15.008554438Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:15:15.009715 containerd[1730]: time="2025-06-20T19:15:15.009693753Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:15:15.009737 containerd[1730]: time="2025-06-20T19:15:15.009722087Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:15:15.009761 containerd[1730]: time="2025-06-20T19:15:15.009750463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:15:15.009905 containerd[1730]: time="2025-06-20T19:15:15.009887844Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:15:15.009929 containerd[1730]: time="2025-06-20T19:15:15.009905612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:15:15.010769 containerd[1730]: time="2025-06-20T19:15:15.010745163Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:15:15.011627 containerd[1730]: time="2025-06-20T19:15:15.010821845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:15:15.011730 containerd[1730]: time="2025-06-20T19:15:15.011707472Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:15:15.011786 containerd[1730]: time="2025-06-20T19:15:15.011776976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:15:15.011931 containerd[1730]: time="2025-06-20T19:15:15.011920189Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:15:15.012203 containerd[1730]: time="2025-06-20T19:15:15.012185548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:15:15.012282 containerd[1730]: time="2025-06-20T19:15:15.012268207Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:15:15.012330 containerd[1730]: time="2025-06-20T19:15:15.012320602Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:15:15.012397 containerd[1730]: time="2025-06-20T19:15:15.012387507Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:15:15.014257 containerd[1730]: time="2025-06-20T19:15:15.013970856Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:15:15.014257 containerd[1730]: time="2025-06-20T19:15:15.014062036Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:15:15.029093 containerd[1730]: time="2025-06-20T19:15:15.029058919Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:15:15.029405 containerd[1730]: time="2025-06-20T19:15:15.029388706Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:15:15.029513 containerd[1730]: time="2025-06-20T19:15:15.029502169Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:15:15.029560 containerd[1730]: time="2025-06-20T19:15:15.029551376Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:15:15.029600 containerd[1730]: time="2025-06-20T19:15:15.029592915Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:15:15.029654 containerd[1730]: time="2025-06-20T19:15:15.029644401Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:15:15.029730 containerd[1730]: time="2025-06-20T19:15:15.029723250Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:15:15.029910 containerd[1730]: time="2025-06-20T19:15:15.029904874Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:15:15.029936 containerd[1730]: time="2025-06-20T19:15:15.029930939Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:15:15.029960 containerd[1730]: time="2025-06-20T19:15:15.029955427Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:15:15.029991 containerd[1730]: time="2025-06-20T19:15:15.029984261Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:15:15.030029 containerd[1730]: time="2025-06-20T19:15:15.030021824Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:15:15.030169 containerd[1730]: time="2025-06-20T19:15:15.030161194Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:15:15.030247 containerd[1730]: time="2025-06-20T19:15:15.030239676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:15:15.030422 containerd[1730]: time="2025-06-20T19:15:15.030413130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:15:15.030489 containerd[1730]: time="2025-06-20T19:15:15.030480731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:15:15.030528 containerd[1730]: time="2025-06-20T19:15:15.030520548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:15:15.030564 containerd[1730]: time="2025-06-20T19:15:15.030557543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:15:15.030604 containerd[1730]: time="2025-06-20T19:15:15.030596788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:15:15.030644 containerd[1730]: time="2025-06-20T19:15:15.030637814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:15:15.030680 containerd[1730]: time="2025-06-20T19:15:15.030673595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:15:15.030714 containerd[1730]: time="2025-06-20T19:15:15.030707634Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:15:15.031760 containerd[1730]: time="2025-06-20T19:15:15.031232554Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:15:15.031760 containerd[1730]: time="2025-06-20T19:15:15.031295939Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:15:15.031760 containerd[1730]: time="2025-06-20T19:15:15.031307237Z" level=info msg="Start snapshots syncer"
Jun 20 19:15:15.031760 containerd[1730]: time="2025-06-20T19:15:15.031334207Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:15:15.031877 containerd[1730]: time="2025-06-20T19:15:15.031656807Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:15:15.031877 containerd[1730]: time="2025-06-20T19:15:15.031709601Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:15:15.032050 containerd[1730]: time="2025-06-20T19:15:15.032040181Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:15:15.032202 containerd[1730]: time="2025-06-20T19:15:15.032193199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:15:15.032247 containerd[1730]: time="2025-06-20T19:15:15.032239467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:15:15.032284 containerd[1730]: time="2025-06-20T19:15:15.032277412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:15:15.032318 containerd[1730]: time="2025-06-20T19:15:15.032312020Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:15:15.032358 containerd[1730]: time="2025-06-20T19:15:15.032351442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:15:15.032394 containerd[1730]: time="2025-06-20T19:15:15.032387313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:15:15.032522 containerd[1730]: time="2025-06-20T19:15:15.032429195Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:15:15.032522 containerd[1730]: time="2025-06-20T19:15:15.032475534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:15:15.032522 containerd[1730]: time="2025-06-20T19:15:15.032488343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:15:15.032522 containerd[1730]: time="2025-06-20T19:15:15.032499746Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:15:15.032674 containerd[1730]: time="2025-06-20T19:15:15.032620534Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:15:15.032674 containerd[1730]: time="2025-06-20T19:15:15.032637806Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:15:15.032674 containerd[1730]: time="2025-06-20T19:15:15.032646850Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:15:15.032674 containerd[1730]: time="2025-06-20T19:15:15.032656533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032665260Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032795609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032806834Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032823343Z" level=info msg="runtime interface created" Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032828648Z" level=info msg="created NRI interface" Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032837624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032849692Z" level=info msg="Connect containerd service" Jun 20 19:15:15.032896 containerd[1730]: time="2025-06-20T19:15:15.032874367Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:15:15.034126 
containerd[1730]: time="2025-06-20T19:15:15.033824581Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:15:15.036980 tar[1716]: linux-amd64/LICENSE Jun 20 19:15:15.037119 tar[1716]: linux-amd64/README.md Jun 20 19:15:15.049523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:15:15.441802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:15.447808 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798663660Z" level=info msg="Start subscribing containerd event" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798824302Z" level=info msg="Start recovering state" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798933128Z" level=info msg="Start event monitor" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798946970Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798954035Z" level=info msg="Start streaming server" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798967074Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798975786Z" level=info msg="runtime interface starting up..." Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798982298Z" level=info msg="starting plugins..." 
Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.798994375Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.799045516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:15:15.799764 containerd[1730]: time="2025-06-20T19:15:15.799121755Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:15:15.799314 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:15:15.802276 containerd[1730]: time="2025-06-20T19:15:15.801691401Z" level=info msg="containerd successfully booted in 0.810808s" Jun 20 19:15:15.802407 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:15:15.805362 systemd[1]: Startup finished in 3.192s (kernel) + 11.037s (initrd) + 9.169s (userspace) = 23.399s. Jun 20 19:15:15.974595 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:15:15.976658 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:15:15.998006 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:15:15.999042 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:15:16.005755 systemd-logind[1706]: New session 2 of user core. Jun 20 19:15:16.010229 systemd-logind[1706]: New session 1 of user core. Jun 20 19:15:16.026247 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:15:16.029081 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:15:16.040738 (systemd)[1861]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:15:16.043102 systemd-logind[1706]: New session c1 of user core. 
Jun 20 19:15:16.052540 kubelet[1845]: E0620 19:15:16.052470 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:15:16.054940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:15:16.055068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:15:16.056747 systemd[1]: kubelet.service: Consumed 966ms CPU time, 263.7M memory peak. Jun 20 19:15:16.237597 systemd[1861]: Queued start job for default target default.target. Jun 20 19:15:16.244882 systemd[1861]: Created slice app.slice - User Application Slice. Jun 20 19:15:16.245187 systemd[1861]: Reached target paths.target - Paths. Jun 20 19:15:16.245305 systemd[1861]: Reached target timers.target - Timers. Jun 20 19:15:16.247539 systemd[1861]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:15:16.260190 systemd[1861]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:15:16.260246 systemd[1861]: Reached target sockets.target - Sockets. Jun 20 19:15:16.260284 systemd[1861]: Reached target basic.target - Basic System. Jun 20 19:15:16.260355 systemd[1861]: Reached target default.target - Main User Target. Jun 20 19:15:16.260379 systemd[1861]: Startup finished in 211ms. Jun 20 19:15:16.260910 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:15:16.266055 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:15:16.268740 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.276516Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.276870Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.277386Z INFO Daemon Daemon Python: 3.11.12 Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.277881Z INFO Daemon Daemon Run daemon Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.278150Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.278743Z INFO Daemon Daemon Using waagent for provisioning Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.278921Z INFO Daemon Daemon Activate resource disk Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.279142Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.282391Z INFO Daemon Daemon Found device: None Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.282764Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.283050Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.283847Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:15:16.289283 waagent[1823]: 2025-06-20T19:15:16.284047Z INFO Daemon Daemon Running default provisioning handler Jun 20 19:15:16.295175 waagent[1823]: 2025-06-20T19:15:16.294261Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jun 20 19:15:16.295819 waagent[1823]: 2025-06-20T19:15:16.295785Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 19:15:16.296523 waagent[1823]: 2025-06-20T19:15:16.296499Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 19:15:16.297112 waagent[1823]: 2025-06-20T19:15:16.297093Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 19:15:16.343109 waagent[1823]: 2025-06-20T19:15:16.342976Z INFO Daemon Daemon Successfully mounted dvd Jun 20 19:15:16.376315 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 19:15:16.378770 waagent[1823]: 2025-06-20T19:15:16.378707Z INFO Daemon Daemon Detect protocol endpoint Jun 20 19:15:16.381456 waagent[1823]: 2025-06-20T19:15:16.380046Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:15:16.381456 waagent[1823]: 2025-06-20T19:15:16.380286Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 20 19:15:16.381456 waagent[1823]: 2025-06-20T19:15:16.380617Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 19:15:16.381456 waagent[1823]: 2025-06-20T19:15:16.380785Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 19:15:16.381456 waagent[1823]: 2025-06-20T19:15:16.381003Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 19:15:16.393340 waagent[1823]: 2025-06-20T19:15:16.393305Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 19:15:16.395081 waagent[1823]: 2025-06-20T19:15:16.394610Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 19:15:16.395081 waagent[1823]: 2025-06-20T19:15:16.394836Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 19:15:16.453010 waagent[1823]: 2025-06-20T19:15:16.452910Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 19:15:16.455472 waagent[1823]: 2025-06-20T19:15:16.454872Z INFO Daemon Daemon Forcing an update of the goal state. 
Jun 20 19:15:16.460277 waagent[1823]: 2025-06-20T19:15:16.460227Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:15:16.489751 waagent[1823]: 2025-06-20T19:15:16.489712Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 19:15:16.492731 waagent[1823]: 2025-06-20T19:15:16.490362Z INFO Daemon Jun 20 19:15:16.492731 waagent[1823]: 2025-06-20T19:15:16.490574Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f2051128-be1a-4def-8e30-d19e15da30bb eTag: 8692951941908441694 source: Fabric] Jun 20 19:15:16.492731 waagent[1823]: 2025-06-20T19:15:16.491394Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 19:15:16.492731 waagent[1823]: 2025-06-20T19:15:16.491790Z INFO Daemon Jun 20 19:15:16.492731 waagent[1823]: 2025-06-20T19:15:16.491946Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:15:16.498771 waagent[1823]: 2025-06-20T19:15:16.495922Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 19:15:16.577003 waagent[1823]: 2025-06-20T19:15:16.576936Z INFO Daemon Downloaded certificate {'thumbprint': '6A409422C488D923CBEF0CE5FAAB1F1F99EFF47C', 'hasPrivateKey': True} Jun 20 19:15:16.579611 waagent[1823]: 2025-06-20T19:15:16.579572Z INFO Daemon Fetch goal state completed Jun 20 19:15:16.588262 waagent[1823]: 2025-06-20T19:15:16.588204Z INFO Daemon Daemon Starting provisioning Jun 20 19:15:16.589256 waagent[1823]: 2025-06-20T19:15:16.589056Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 20 19:15:16.589814 waagent[1823]: 2025-06-20T19:15:16.589301Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-a13bac16ef] Jun 20 19:15:16.606885 waagent[1823]: 2025-06-20T19:15:16.606840Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-a13bac16ef] Jun 20 19:15:16.613691 waagent[1823]: 2025-06-20T19:15:16.607165Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 19:15:16.613691 waagent[1823]: 2025-06-20T19:15:16.607457Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 19:15:16.616005 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:15:16.616012 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:15:16.616045 systemd-networkd[1368]: eth0: DHCP lease lost Jun 20 19:15:16.617072 waagent[1823]: 2025-06-20T19:15:16.616976Z INFO Daemon Daemon Create user account if not exists Jun 20 19:15:16.618223 waagent[1823]: 2025-06-20T19:15:16.617253Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 19:15:16.618223 waagent[1823]: 2025-06-20T19:15:16.617551Z INFO Daemon Daemon Configure sudoer Jun 20 19:15:16.622666 waagent[1823]: 2025-06-20T19:15:16.622618Z INFO Daemon Daemon Configure sshd Jun 20 19:15:16.627731 waagent[1823]: 2025-06-20T19:15:16.627691Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 19:15:16.632162 waagent[1823]: 2025-06-20T19:15:16.631675Z INFO Daemon Daemon Deploy ssh public key. 
Jun 20 19:15:16.642493 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:15:17.722158 waagent[1823]: 2025-06-20T19:15:17.722111Z INFO Daemon Daemon Provisioning complete Jun 20 19:15:17.733101 waagent[1823]: 2025-06-20T19:15:17.733067Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 19:15:17.734856 waagent[1823]: 2025-06-20T19:15:17.733305Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 19:15:17.734856 waagent[1823]: 2025-06-20T19:15:17.733418Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 19:15:17.838163 waagent[1913]: 2025-06-20T19:15:17.838078Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 19:15:17.838525 waagent[1913]: 2025-06-20T19:15:17.838212Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 19:15:17.838525 waagent[1913]: 2025-06-20T19:15:17.838253Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 19:15:17.838525 waagent[1913]: 2025-06-20T19:15:17.838293Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 20 19:15:17.879819 waagent[1913]: 2025-06-20T19:15:17.879742Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 19:15:17.879985 waagent[1913]: 2025-06-20T19:15:17.879959Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:17.880046 waagent[1913]: 2025-06-20T19:15:17.880017Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:17.891307 waagent[1913]: 2025-06-20T19:15:17.891247Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:15:17.902211 waagent[1913]: 2025-06-20T19:15:17.902171Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 
19:15:17.902654 waagent[1913]: 2025-06-20T19:15:17.902620Z INFO ExtHandler Jun 20 19:15:17.902705 waagent[1913]: 2025-06-20T19:15:17.902682Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8a05b915-3e9a-439f-b414-a2f6117c4f58 eTag: 8692951941908441694 source: Fabric] Jun 20 19:15:17.902917 waagent[1913]: 2025-06-20T19:15:17.902893Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 20 19:15:17.903259 waagent[1913]: 2025-06-20T19:15:17.903236Z INFO ExtHandler Jun 20 19:15:17.903295 waagent[1913]: 2025-06-20T19:15:17.903277Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:15:17.906223 waagent[1913]: 2025-06-20T19:15:17.906186Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 19:15:17.996730 waagent[1913]: 2025-06-20T19:15:17.996611Z INFO ExtHandler Downloaded certificate {'thumbprint': '6A409422C488D923CBEF0CE5FAAB1F1F99EFF47C', 'hasPrivateKey': True} Jun 20 19:15:17.997082 waagent[1913]: 2025-06-20T19:15:17.997052Z INFO ExtHandler Fetch goal state completed Jun 20 19:15:18.017969 waagent[1913]: 2025-06-20T19:15:18.017903Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 19:15:18.022769 waagent[1913]: 2025-06-20T19:15:18.022716Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1913 Jun 20 19:15:18.022893 waagent[1913]: 2025-06-20T19:15:18.022871Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 19:15:18.023152 waagent[1913]: 2025-06-20T19:15:18.023131Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 19:15:18.024228 waagent[1913]: 2025-06-20T19:15:18.024192Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 19:15:18.024568 waagent[1913]: 
2025-06-20T19:15:18.024538Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 19:15:18.024684 waagent[1913]: 2025-06-20T19:15:18.024660Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 19:15:18.025090 waagent[1913]: 2025-06-20T19:15:18.025062Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 19:15:18.043236 waagent[1913]: 2025-06-20T19:15:18.043200Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 19:15:18.043402 waagent[1913]: 2025-06-20T19:15:18.043382Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 19:15:18.048885 waagent[1913]: 2025-06-20T19:15:18.048851Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 19:15:18.054269 systemd[1]: Reload requested from client PID 1928 ('systemctl') (unit waagent.service)... Jun 20 19:15:18.054282 systemd[1]: Reloading... Jun 20 19:15:18.124461 zram_generator::config[1966]: No configuration found. Jun 20 19:15:18.210005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:15:18.308144 systemd[1]: Reloading finished in 253 ms. 
Jun 20 19:15:18.324787 waagent[1913]: 2025-06-20T19:15:18.324219Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 19:15:18.324787 waagent[1913]: 2025-06-20T19:15:18.324385Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 19:15:18.529670 waagent[1913]: 2025-06-20T19:15:18.529594Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 19:15:18.529946 waagent[1913]: 2025-06-20T19:15:18.529919Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 19:15:18.530746 waagent[1913]: 2025-06-20T19:15:18.530603Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 19:15:18.531060 waagent[1913]: 2025-06-20T19:15:18.531031Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:18.531096 waagent[1913]: 2025-06-20T19:15:18.531066Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 19:15:18.531355 waagent[1913]: 2025-06-20T19:15:18.531318Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 19:15:18.531476 waagent[1913]: 2025-06-20T19:15:18.531420Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:18.531606 waagent[1913]: 2025-06-20T19:15:18.531585Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 19:15:18.531771 waagent[1913]: 2025-06-20T19:15:18.531744Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jun 20 19:15:18.532359 waagent[1913]: 2025-06-20T19:15:18.532295Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:18.532594 waagent[1913]: 2025-06-20T19:15:18.532571Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 19:15:18.532594 waagent[1913]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 19:15:18.532594 waagent[1913]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 19:15:18.532594 waagent[1913]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 19:15:18.532594 waagent[1913]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:18.532594 waagent[1913]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:18.532594 waagent[1913]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:18.532855 waagent[1913]: 2025-06-20T19:15:18.532658Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 19:15:18.532890 waagent[1913]: 2025-06-20T19:15:18.532862Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jun 20 19:15:18.533088 waagent[1913]: 2025-06-20T19:15:18.533056Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:18.533343 waagent[1913]: 2025-06-20T19:15:18.533313Z INFO EnvHandler ExtHandler Configure routes Jun 20 19:15:18.533871 waagent[1913]: 2025-06-20T19:15:18.533788Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 19:15:18.534009 waagent[1913]: 2025-06-20T19:15:18.533977Z INFO EnvHandler ExtHandler Gateway:None Jun 20 19:15:18.534055 waagent[1913]: 2025-06-20T19:15:18.534038Z INFO EnvHandler ExtHandler Routes:None Jun 20 19:15:18.549668 waagent[1913]: 2025-06-20T19:15:18.549628Z INFO ExtHandler ExtHandler Jun 20 19:15:18.549766 waagent[1913]: 2025-06-20T19:15:18.549697Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2cd9b4df-b66d-4f63-a7f8-d63d809cc79c correlation 0111944b-55c5-437c-843c-7958f05a5f91 created: 2025-06-20T19:14:24.791466Z] Jun 20 19:15:18.550025 waagent[1913]: 2025-06-20T19:15:18.550000Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jun 20 19:15:18.550422 waagent[1913]: 2025-06-20T19:15:18.550397Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 19:15:18.573795 waagent[1913]: 2025-06-20T19:15:18.573694Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 19:15:18.573795 waagent[1913]: Executing ['ip', '-a', '-o', 'link']: Jun 20 19:15:18.573795 waagent[1913]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 19:15:18.573795 waagent[1913]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f2:2a:97 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 20 19:15:18.573795 waagent[1913]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:f2:2a:97 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jun 20 19:15:18.573795 waagent[1913]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 19:15:18.573795 waagent[1913]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 19:15:18.573795 waagent[1913]: 2: eth0 inet 10.200.4.21/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 19:15:18.573795 waagent[1913]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 19:15:18.573795 waagent[1913]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 19:15:18.573795 waagent[1913]: 2: eth0 inet6 fe80::6245:bdff:fef2:2a97/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:15:18.573795 waagent[1913]: 3: enP30832s1 inet6 fe80::6245:bdff:fef2:2a97/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:15:18.592586 waagent[1913]: 2025-06-20T19:15:18.592496Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 
(nf_tables): Illegal option `--numeric' with this command
Jun 20 19:15:18.592586 waagent[1913]: Try `iptables -h' or 'iptables --help' for more information.)
Jun 20 19:15:18.593794 waagent[1913]: 2025-06-20T19:15:18.593694Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 80373587-21C4-4687-888E-D24B9194E1B2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jun 20 19:15:18.625814 waagent[1913]: 2025-06-20T19:15:18.625762Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jun 20 19:15:18.625814 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:18.625814 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.625814 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:18.625814 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.625814 waagent[1913]: Chain OUTPUT (policy ACCEPT 3 packets, 356 bytes)
Jun 20 19:15:18.625814 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.625814 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:15:18.625814 waagent[1913]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:15:18.625814 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:15:18.629086 waagent[1913]: 2025-06-20T19:15:18.629040Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 20 19:15:18.629086 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:18.629086 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.629086 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:18.629086 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.629086 waagent[1913]: Chain OUTPUT (policy ACCEPT 3 packets, 356 bytes)
Jun 20 19:15:18.629086 waagent[1913]: pkts bytes target prot opt in out source destination
Jun 20 19:15:18.629086 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:15:18.629086 waagent[1913]: 6 460 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:15:18.629086 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:15:26.168166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:15:26.169726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:26.577631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:26.589802 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:26.628838 kubelet[2064]: E0620 19:15:26.628784 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:26.632249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:26.632393 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:26.632721 systemd[1]: kubelet.service: Consumed 136ms CPU time, 111.2M memory peak.
Jun 20 19:15:36.668214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:15:36.669734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:37.122639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
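[Editor's note] The waagent chain listings above are `iptables -nvL`-style counter output. A minimal parsing sketch, assuming the fixed column layout shown (pkts, bytes, target, prot, opt, in, out, source, destination, then match text); the rule strings below are copied from the log with the timestamp/daemon prefix stripped:

```python
# Parse iptables counter lines as they appear in the waagent log above.
RULES = [
    "0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53",
    "2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0",
    "0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW",
]

def parse_rule(line):
    """Split one `iptables -nvL`-style line into a dict.

    Columns: pkts bytes target prot opt in out source destination [match...]
    """
    parts = line.split()
    return {
        "pkts": int(parts[0]),
        "bytes": int(parts[1]),
        "target": parts[2],
        "prot": parts[3],
        "source": parts[7],
        "destination": parts[8],
        "match": " ".join(parts[9:]),
    }

rules = [parse_rule(r) for r in RULES]
```

All three rules target the Azure wireserver address 168.63.129.16: DNS (tcp/53) and root-owned (UID 0) traffic are accepted, and other new connections to it are dropped.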
Jun 20 19:15:37.130701 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:37.161970 kubelet[2079]: E0620 19:15:37.161923 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:37.164037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:37.164189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:37.164516 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak.
Jun 20 19:15:37.878287 chronyd[1733]: Selected source PHC0
Jun 20 19:15:41.627081 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:15:41.628220 systemd[1]: Started sshd@0-10.200.4.21:22-10.200.16.10:52262.service - OpenSSH per-connection server daemon (10.200.16.10:52262).
Jun 20 19:15:42.311159 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 52262 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:42.312331 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:42.316888 systemd-logind[1706]: New session 3 of user core.
Jun 20 19:15:42.322571 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:15:42.836498 systemd[1]: Started sshd@1-10.200.4.21:22-10.200.16.10:52278.service - OpenSSH per-connection server daemon (10.200.16.10:52278).
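[Editor's note] The kubelet unit above crash-loops because /var/lib/kubelet/config.yaml does not exist yet (that file is typically written later, e.g. by kubeadm), and systemd reschedules the unit roughly every 10 seconds, visible in the "restart counter" entries. A small sketch that extracts that cadence from such lines, assuming the exact journal short-timestamp layout shown above:

```python
import re
from datetime import datetime

# Two of the journal entries above, verbatim.
LINES = [
    "Jun 20 19:15:26.168166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.",
    "Jun 20 19:15:36.668214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.",
]

PATTERN = re.compile(
    r"^(\w{3} \d+ [\d:.]+) systemd\[1\]: (\S+): "
    r"Scheduled restart job, restart counter is at (\d+)\.$"
)

def restarts(lines):
    """Yield (timestamp, unit, counter) for each scheduled-restart entry."""
    for line in lines:
        m = PATTERN.match(line)
        if m:
            # Journal short timestamps omit the year; parsing without it is
            # fine for computing deltas within a single boot.
            ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            yield ts, m.group(2), int(m.group(3))

events = list(restarts(LINES))
gap = (events[1][0] - events[0][0]).total_seconds()  # ~10.5 s between restarts
```

The ~10 s spacing is consistent with a systemd `RestartSec=` on the order of 10 seconds, though the unit file itself is not shown in this log.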
Jun 20 19:15:43.424072 sshd[2092]: Accepted publickey for core from 10.200.16.10 port 52278 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:43.425208 sshd-session[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:43.429765 systemd-logind[1706]: New session 4 of user core.
Jun 20 19:15:43.438587 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:15:43.847120 sshd[2094]: Connection closed by 10.200.16.10 port 52278
Jun 20 19:15:43.847947 sshd-session[2092]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:43.850717 systemd[1]: sshd@1-10.200.4.21:22-10.200.16.10:52278.service: Deactivated successfully.
Jun 20 19:15:43.852387 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:15:43.853761 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:15:43.855032 systemd-logind[1706]: Removed session 4.
Jun 20 19:15:43.951201 systemd[1]: Started sshd@2-10.200.4.21:22-10.200.16.10:52284.service - OpenSSH per-connection server daemon (10.200.16.10:52284).
Jun 20 19:15:44.541065 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 52284 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:44.542219 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:44.546515 systemd-logind[1706]: New session 5 of user core.
Jun 20 19:15:44.552595 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:15:44.954558 sshd[2102]: Connection closed by 10.200.16.10 port 52284
Jun 20 19:15:44.955175 sshd-session[2100]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:44.958588 systemd[1]: sshd@2-10.200.4.21:22-10.200.16.10:52284.service: Deactivated successfully.
Jun 20 19:15:44.960118 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:15:44.960808 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:15:44.962067 systemd-logind[1706]: Removed session 5.
Jun 20 19:15:45.063278 systemd[1]: Started sshd@3-10.200.4.21:22-10.200.16.10:52292.service - OpenSSH per-connection server daemon (10.200.16.10:52292).
Jun 20 19:15:45.690843 sshd[2108]: Accepted publickey for core from 10.200.16.10 port 52292 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:45.692055 sshd-session[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:45.696449 systemd-logind[1706]: New session 6 of user core.
Jun 20 19:15:45.702622 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:15:46.128574 sshd[2110]: Connection closed by 10.200.16.10 port 52292
Jun 20 19:15:46.130609 sshd-session[2108]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:46.133031 systemd[1]: sshd@3-10.200.4.21:22-10.200.16.10:52292.service: Deactivated successfully.
Jun 20 19:15:46.134857 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:15:46.136735 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:15:46.137618 systemd-logind[1706]: Removed session 6.
Jun 20 19:15:46.234257 systemd[1]: Started sshd@4-10.200.4.21:22-10.200.16.10:52308.service - OpenSSH per-connection server daemon (10.200.16.10:52308).
Jun 20 19:15:46.961763 sshd[2116]: Accepted publickey for core from 10.200.16.10 port 52308 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:46.962979 sshd-session[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:46.967425 systemd-logind[1706]: New session 7 of user core.
Jun 20 19:15:46.972607 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:15:47.168141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:15:47.169921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:47.602892 sudo[2122]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:15:47.603119 sudo[2122]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:47.614624 sudo[2122]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:47.710775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:47.714534 sshd[2118]: Connection closed by 10.200.16.10 port 52308
Jun 20 19:15:47.714200 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:47.715261 sshd-session[2116]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:47.720785 systemd[1]: sshd@4-10.200.4.21:22-10.200.16.10:52308.service: Deactivated successfully.
Jun 20 19:15:47.721315 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:15:47.723781 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:15:47.726982 systemd-logind[1706]: Removed session 7.
Jun 20 19:15:47.755035 kubelet[2129]: E0620 19:15:47.754985 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:47.756802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:47.756937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:47.757265 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.7M memory peak.
Jun 20 19:15:47.822411 systemd[1]: Started sshd@5-10.200.4.21:22-10.200.16.10:52310.service - OpenSSH per-connection server daemon (10.200.16.10:52310).
Jun 20 19:15:48.442286 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 52310 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:48.443517 sshd-session[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:48.447890 systemd-logind[1706]: New session 8 of user core.
Jun 20 19:15:48.450614 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:15:48.767689 sudo[2145]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:15:48.768076 sudo[2145]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:48.775718 sudo[2145]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:48.779878 sudo[2144]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:15:48.780099 sudo[2144]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:48.788120 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:15:48.821812 augenrules[2167]: No rules
Jun 20 19:15:48.822881 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:15:48.823114 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:15:48.824366 sudo[2144]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:48.918984 sshd[2143]: Connection closed by 10.200.16.10 port 52310
Jun 20 19:15:48.919610 sshd-session[2141]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:48.922531 systemd[1]: sshd@5-10.200.4.21:22-10.200.16.10:52310.service: Deactivated successfully.
Jun 20 19:15:48.924145 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:15:48.924842 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:15:48.926412 systemd-logind[1706]: Removed session 8.
Jun 20 19:15:49.025365 systemd[1]: Started sshd@6-10.200.4.21:22-10.200.16.10:39832.service - OpenSSH per-connection server daemon (10.200.16.10:39832).
Jun 20 19:15:49.619585 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 39832 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:49.620717 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:49.625166 systemd-logind[1706]: New session 9 of user core.
Jun 20 19:15:49.635609 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:15:49.979727 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:15:49.979954 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:51.448285 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:15:51.460811 (dockerd)[2197]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:15:52.197521 dockerd[2197]: time="2025-06-20T19:15:52.197426126Z" level=info msg="Starting up"
Jun 20 19:15:52.198882 dockerd[2197]: time="2025-06-20T19:15:52.198770806Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:15:52.285121 dockerd[2197]: time="2025-06-20T19:15:52.285070596Z" level=info msg="Loading containers: start."
Jun 20 19:15:52.327456 kernel: Initializing XFRM netlink socket
Jun 20 19:15:52.621368 systemd-networkd[1368]: docker0: Link UP
Jun 20 19:15:52.633850 dockerd[2197]: time="2025-06-20T19:15:52.633807418Z" level=info msg="Loading containers: done."
Jun 20 19:15:52.664408 dockerd[2197]: time="2025-06-20T19:15:52.664310618Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:15:52.664590 dockerd[2197]: time="2025-06-20T19:15:52.664479569Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:15:52.664590 dockerd[2197]: time="2025-06-20T19:15:52.664583344Z" level=info msg="Initializing buildkit"
Jun 20 19:15:52.709132 dockerd[2197]: time="2025-06-20T19:15:52.709077488Z" level=info msg="Completed buildkit initialization"
Jun 20 19:15:52.715972 dockerd[2197]: time="2025-06-20T19:15:52.715919028Z" level=info msg="Daemon has completed initialization"
Jun 20 19:15:52.715972 dockerd[2197]: time="2025-06-20T19:15:52.715998945Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:15:52.716173 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:15:53.658541 containerd[1730]: time="2025-06-20T19:15:53.658500179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 20 19:15:54.455828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2481967914.mount: Deactivated successfully.
Jun 20 19:15:55.522601 containerd[1730]: time="2025-06-20T19:15:55.522546984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:55.524893 containerd[1730]: time="2025-06-20T19:15:55.524717564Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jun 20 19:15:55.527748 containerd[1730]: time="2025-06-20T19:15:55.527722905Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:55.533797 containerd[1730]: time="2025-06-20T19:15:55.533758792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:55.534403 containerd[1730]: time="2025-06-20T19:15:55.534376498Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.875837345s"
Jun 20 19:15:55.534495 containerd[1730]: time="2025-06-20T19:15:55.534482919Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jun 20 19:15:55.535131 containerd[1730]: time="2025-06-20T19:15:55.535105214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 20 19:15:56.816601 containerd[1730]: time="2025-06-20T19:15:56.816549010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:56.818891 containerd[1730]: time="2025-06-20T19:15:56.818862110Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jun 20 19:15:56.821910 containerd[1730]: time="2025-06-20T19:15:56.821887561Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:56.825607 containerd[1730]: time="2025-06-20T19:15:56.825578705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:56.826168 containerd[1730]: time="2025-06-20T19:15:56.826146349Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.291012764s"
Jun 20 19:15:56.826238 containerd[1730]: time="2025-06-20T19:15:56.826226855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jun 20 19:15:56.826880 containerd[1730]: time="2025-06-20T19:15:56.826856424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 20 19:15:57.918374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 19:15:57.921279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
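[Editor's note] Each containerd "Pulled image" entry above reports the transferred size and the wall-clock duration, so effective pull throughput can be read straight off the line. A hedged sketch using the kube-apiserver entry (the sample string is simplified: journal backslash-escapes removed and middle fields omitted):

```python
import re

# A simplified copy of one containerd "Pulled image" entry from the log above.
ENTRY = ('Pulled image "registry.k8s.io/kube-apiserver:v1.31.10" '
         'size "28074544" in 1.875837345s')

# Extract the byte count and duration that containerd reported.
m = re.search(r'size "(\d+)" in ([\d.]+)s', ENTRY)
size_bytes = int(m.group(1))
seconds = float(m.group(2))
throughput_mb_s = size_bytes / seconds / 1e6  # ~15 MB/s for this pull
```

The same pattern applies to the other pulls in this log; note the `bytes read` value in the "stop pulling" entry differs slightly from the repo-digest `size`, so the two should not be mixed in one calculation.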
Jun 20 19:15:57.958461 containerd[1730]: time="2025-06-20T19:15:57.957741782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:57.961096 containerd[1730]: time="2025-06-20T19:15:57.961067242Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jun 20 19:15:58.090206 containerd[1730]: time="2025-06-20T19:15:58.090153002Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:58.386710 containerd[1730]: time="2025-06-20T19:15:58.386579415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:58.389979 containerd[1730]: time="2025-06-20T19:15:58.389948541Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.563060836s"
Jun 20 19:15:58.390148 containerd[1730]: time="2025-06-20T19:15:58.390095692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jun 20 19:15:58.392444 containerd[1730]: time="2025-06-20T19:15:58.392243833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jun 20 19:15:58.416733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:58.420807 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:58.457236 kubelet[2465]: E0620 19:15:58.457175 2465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:58.459049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:58.459191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:58.459587 systemd[1]: kubelet.service: Consumed 134ms CPU time, 109M memory peak.
Jun 20 19:15:58.833566 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jun 20 19:15:59.412631 update_engine[1709]: I20250620 19:15:59.412554 1709 update_attempter.cc:509] Updating boot flags...
Jun 20 19:15:59.806495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104256869.mount: Deactivated successfully.
Jun 20 19:16:00.162344 containerd[1730]: time="2025-06-20T19:16:00.162225365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:00.164726 containerd[1730]: time="2025-06-20T19:16:00.164685755Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jun 20 19:16:00.167611 containerd[1730]: time="2025-06-20T19:16:00.167569077Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:00.170889 containerd[1730]: time="2025-06-20T19:16:00.170848484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:00.171373 containerd[1730]: time="2025-06-20T19:16:00.171154515Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.778877107s"
Jun 20 19:16:00.171373 containerd[1730]: time="2025-06-20T19:16:00.171186379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jun 20 19:16:00.171691 containerd[1730]: time="2025-06-20T19:16:00.171666330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:16:00.855065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707516978.mount: Deactivated successfully.
Jun 20 19:16:01.831161 containerd[1730]: time="2025-06-20T19:16:01.831106976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.836373 containerd[1730]: time="2025-06-20T19:16:01.836330943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jun 20 19:16:01.839446 containerd[1730]: time="2025-06-20T19:16:01.839394869Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.844180 containerd[1730]: time="2025-06-20T19:16:01.844121306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.845271 containerd[1730]: time="2025-06-20T19:16:01.844834676Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.673139949s"
Jun 20 19:16:01.845271 containerd[1730]: time="2025-06-20T19:16:01.844869056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:16:01.845526 containerd[1730]: time="2025-06-20T19:16:01.845501717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:16:02.440303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625559381.mount: Deactivated successfully.
Jun 20 19:16:02.468218 containerd[1730]: time="2025-06-20T19:16:02.468167164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:02.470618 containerd[1730]: time="2025-06-20T19:16:02.470480289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jun 20 19:16:02.473210 containerd[1730]: time="2025-06-20T19:16:02.473182669Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:02.476793 containerd[1730]: time="2025-06-20T19:16:02.476746760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:02.477262 containerd[1730]: time="2025-06-20T19:16:02.477127017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 631.596156ms"
Jun 20 19:16:02.477262 containerd[1730]: time="2025-06-20T19:16:02.477156168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:16:02.477740 containerd[1730]: time="2025-06-20T19:16:02.477698841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jun 20 19:16:03.147992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608455622.mount: Deactivated successfully.
Jun 20 19:16:04.901479 containerd[1730]: time="2025-06-20T19:16:04.901414659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:04.904059 containerd[1730]: time="2025-06-20T19:16:04.904020708Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jun 20 19:16:04.906551 containerd[1730]: time="2025-06-20T19:16:04.906508857Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:04.910371 containerd[1730]: time="2025-06-20T19:16:04.910327149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:04.911156 containerd[1730]: time="2025-06-20T19:16:04.911012516Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.43325516s"
Jun 20 19:16:04.911156 containerd[1730]: time="2025-06-20T19:16:04.911047675Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jun 20 19:16:07.425726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:07.426180 systemd[1]: kubelet.service: Consumed 134ms CPU time, 109M memory peak.
Jun 20 19:16:07.428208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
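[Editor's note] With the etcd pull above, all seven control-plane images for this node have been fetched. A quick tally of the repo-digest `size` fields exactly as containerd reported them in the "Pulled image" entries (not the slightly different `bytes read` values):

```python
# Repo-digest sizes (bytes) from the "Pulled image ... size" entries above.
PULLED = {
    "registry.k8s.io/kube-apiserver:v1.31.10": 28074544,
    "registry.k8s.io/kube-controller-manager:v1.31.10": 26315128,
    "registry.k8s.io/kube-scheduler:v1.31.10": 20385523,
    "registry.k8s.io/kube-proxy:v1.31.10": 30382962,
    "registry.k8s.io/coredns/coredns:v1.11.3": 18562039,
    "registry.k8s.io/pause:3.10": 320368,
    "registry.k8s.io/etcd:3.5.15-0": 56909194,
}

total_bytes = sum(PULLED.values())
total_mb = total_bytes / 1e6  # roughly 181 MB across all seven images
```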
Jun 20 19:16:07.450512 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-9.scope)...
Jun 20 19:16:07.450525 systemd[1]: Reloading...
Jun 20 19:16:07.548477 zram_generator::config[2686]: No configuration found.
Jun 20 19:16:07.723589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:16:07.821330 systemd[1]: Reloading finished in 370 ms.
Jun 20 19:16:07.855022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:16:07.855118 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:16:07.855400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:07.855473 systemd[1]: kubelet.service: Consumed 92ms CPU time, 92.8M memory peak.
Jun 20 19:16:07.857046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:08.427751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:08.436733 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:16:08.474072 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:16:08.474072 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:16:08.474072 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:16:08.475460 kubelet[2753]: I0620 19:16:08.474447 2753 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:16:08.888381 kubelet[2753]: I0620 19:16:08.888267 2753 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jun 20 19:16:08.888381 kubelet[2753]: I0620 19:16:08.888294 2753 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:16:08.888835 kubelet[2753]: I0620 19:16:08.888689 2753 server.go:934] "Client rotation is on, will bootstrap in background"
Jun 20 19:16:08.916529 kubelet[2753]: E0620 19:16:08.916471 2753 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:16:08.917412 kubelet[2753]: I0620 19:16:08.917304 2753 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:16:08.926040 kubelet[2753]: I0620 19:16:08.926016 2753 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:16:08.930457 kubelet[2753]: I0620 19:16:08.930059 2753 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:16:08.930457 kubelet[2753]: I0620 19:16:08.930163 2753 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jun 20 19:16:08.930457 kubelet[2753]: I0620 19:16:08.930268 2753 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:16:08.930749 kubelet[2753]: I0620 19:16:08.930289 2753 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-a13bac16ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:16:08.930895 kubelet[2753]: I0620 19:16:08.930887 2753 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:16:08.930937 kubelet[2753]: I0620 19:16:08.930933 2753 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:16:08.931079 kubelet[2753]: I0620 19:16:08.931074 2753 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:08.934789 kubelet[2753]: I0620 19:16:08.934761 2753 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:16:08.934859 kubelet[2753]: I0620 19:16:08.934798 2753 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:16:08.934859 kubelet[2753]: I0620 19:16:08.934839 2753 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:16:08.934859 kubelet[2753]: I0620 19:16:08.934858 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:16:08.940538 kubelet[2753]: W0620 19:16:08.940097 2753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a13bac16ef&limit=500&resourceVersion=0": dial tcp 10.200.4.21:6443: connect: connection refused Jun 20 19:16:08.940538 kubelet[2753]: E0620 19:16:08.940175 2753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a13bac16ef&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:16:08.940538 kubelet[2753]: I0620 19:16:08.940248 2753 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:16:08.940670 kubelet[2753]: I0620 19:16:08.940665 2753 kubelet.go:837] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode" Jun 20 19:16:08.940728 kubelet[2753]: W0620 19:16:08.940718 2753 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:16:08.942700 kubelet[2753]: I0620 19:16:08.942672 2753 server.go:1274] "Started kubelet" Jun 20 19:16:08.944464 kubelet[2753]: W0620 19:16:08.943428 2753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.21:6443: connect: connection refused Jun 20 19:16:08.944464 kubelet[2753]: E0620 19:16:08.943493 2753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:16:08.944464 kubelet[2753]: I0620 19:16:08.943711 2753 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:16:08.944661 kubelet[2753]: I0620 19:16:08.944646 2753 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:16:08.944973 kubelet[2753]: I0620 19:16:08.944961 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:16:08.946878 kubelet[2753]: I0620 19:16:08.946859 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:16:08.947070 kubelet[2753]: I0620 19:16:08.947041 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:16:08.947242 kubelet[2753]: I0620 19:16:08.947229 2753 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Jun 20 19:16:08.947478 kubelet[2753]: E0620 19:16:08.947463 2753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a13bac16ef\" not found" Jun 20 19:16:08.947516 kubelet[2753]: I0620 19:16:08.947508 2753 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:16:08.947692 kubelet[2753]: I0620 19:16:08.947681 2753 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:16:08.947737 kubelet[2753]: I0620 19:16:08.947729 2753 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:16:08.948064 kubelet[2753]: W0620 19:16:08.948028 2753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.21:6443: connect: connection refused Jun 20 19:16:08.948108 kubelet[2753]: E0620 19:16:08.948075 2753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:16:08.962386 kubelet[2753]: I0620 19:16:08.962361 2753 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:16:08.962484 kubelet[2753]: I0620 19:16:08.962468 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:16:08.963248 kubelet[2753]: E0620 19:16:08.963224 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a13bac16ef?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="200ms" Jun 
20 19:16:08.964734 kubelet[2753]: E0620 19:16:08.963614 2753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-a13bac16ef.184ad63e75fc6d1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-a13bac16ef,UID:ci-4344.1.0-a-a13bac16ef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-a13bac16ef,},FirstTimestamp:2025-06-20 19:16:08.942652699 +0000 UTC m=+0.502433087,LastTimestamp:2025-06-20 19:16:08.942652699 +0000 UTC m=+0.502433087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-a13bac16ef,}" Jun 20 19:16:08.964912 kubelet[2753]: I0620 19:16:08.964897 2753 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:16:08.969159 kubelet[2753]: E0620 19:16:08.969124 2753 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:16:08.977129 kubelet[2753]: I0620 19:16:08.977106 2753 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:16:08.977129 kubelet[2753]: I0620 19:16:08.977119 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:16:08.977129 kubelet[2753]: I0620 19:16:08.977134 2753 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:08.981867 kubelet[2753]: I0620 19:16:08.981851 2753 policy_none.go:49] "None policy: Start" Jun 20 19:16:08.982265 kubelet[2753]: I0620 19:16:08.982251 2753 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:16:08.982328 kubelet[2753]: I0620 19:16:08.982268 2753 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:16:08.991737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:16:09.002589 kubelet[2753]: I0620 19:16:09.002564 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:16:09.004509 kubelet[2753]: I0620 19:16:09.004335 2753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:16:09.004509 kubelet[2753]: I0620 19:16:09.004358 2753 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:16:09.004509 kubelet[2753]: I0620 19:16:09.004381 2753 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:16:09.004509 kubelet[2753]: E0620 19:16:09.004416 2753 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:16:09.008317 kubelet[2753]: W0620 19:16:09.008219 2753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.21:6443: connect: connection refused Jun 20 19:16:09.008317 kubelet[2753]: E0620 19:16:09.008255 2753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:16:09.010068 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:16:09.012666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 20 19:16:09.022947 kubelet[2753]: I0620 19:16:09.022925 2753 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:16:09.023100 kubelet[2753]: I0620 19:16:09.023088 2753 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:16:09.023135 kubelet[2753]: I0620 19:16:09.023101 2753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:16:09.024214 kubelet[2753]: I0620 19:16:09.023663 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:16:09.025217 kubelet[2753]: E0620 19:16:09.025197 2753 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-a13bac16ef\" not found" Jun 20 19:16:09.114419 systemd[1]: Created slice kubepods-burstable-podebba66f99866046963b63963adfe95ac.slice - libcontainer container kubepods-burstable-podebba66f99866046963b63963adfe95ac.slice. Jun 20 19:16:09.124402 kubelet[2753]: I0620 19:16:09.124377 2753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.124845 kubelet[2753]: E0620 19:16:09.124744 2753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.126253 systemd[1]: Created slice kubepods-burstable-pod78835c9fc1b7b3cd7581c20e4ced4600.slice - libcontainer container kubepods-burstable-pod78835c9fc1b7b3cd7581c20e4ced4600.slice. Jun 20 19:16:09.135794 systemd[1]: Created slice kubepods-burstable-pod80d23c07b1c7bff131ad3fb5a5e01f66.slice - libcontainer container kubepods-burstable-pod80d23c07b1c7bff131ad3fb5a5e01f66.slice. 
Jun 20 19:16:09.164178 kubelet[2753]: E0620 19:16:09.164064 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a13bac16ef?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="400ms" Jun 20 19:16:09.249508 kubelet[2753]: I0620 19:16:09.249459 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249508 kubelet[2753]: I0620 19:16:09.249510 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249707 kubelet[2753]: I0620 19:16:09.249531 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249707 kubelet[2753]: I0620 19:16:09.249552 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: 
\"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249707 kubelet[2753]: I0620 19:16:09.249569 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249707 kubelet[2753]: I0620 19:16:09.249589 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78835c9fc1b7b3cd7581c20e4ced4600-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-a13bac16ef\" (UID: \"78835c9fc1b7b3cd7581c20e4ced4600\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249707 kubelet[2753]: I0620 19:16:09.249605 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249813 kubelet[2753]: I0620 19:16:09.249621 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.249813 kubelet[2753]: I0620 19:16:09.249636 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.327214 kubelet[2753]: I0620 19:16:09.327170 2753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.327592 kubelet[2753]: E0620 19:16:09.327572 2753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.425144 containerd[1730]: time="2025-06-20T19:16:09.425106759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-a13bac16ef,Uid:ebba66f99866046963b63963adfe95ac,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:09.428662 containerd[1730]: time="2025-06-20T19:16:09.428625391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-a13bac16ef,Uid:78835c9fc1b7b3cd7581c20e4ced4600,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:09.438279 containerd[1730]: time="2025-06-20T19:16:09.438249777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-a13bac16ef,Uid:80d23c07b1c7bff131ad3fb5a5e01f66,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:09.534220 containerd[1730]: time="2025-06-20T19:16:09.533505552Z" level=info msg="connecting to shim d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1" address="unix:///run/containerd/s/377a79aca6f3db63ca8707349495844b4ef942de40c93c29e17947db6cc59342" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:09.547766 containerd[1730]: time="2025-06-20T19:16:09.547725346Z" level=info msg="connecting to shim 187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247" 
address="unix:///run/containerd/s/1729130589fa1dfcf5b8703336db3991b6ce20c212ac8ab6796c442e46eec7a5" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:09.561658 containerd[1730]: time="2025-06-20T19:16:09.561618696Z" level=info msg="connecting to shim f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97" address="unix:///run/containerd/s/abcc0ac1e696ec02a924f2fcd50cbcce7e28fa7b491787675a56072ffb7e2e64" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:09.565798 kubelet[2753]: E0620 19:16:09.564832 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a13bac16ef?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="800ms" Jun 20 19:16:09.581607 systemd[1]: Started cri-containerd-d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1.scope - libcontainer container d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1. Jun 20 19:16:09.587586 systemd[1]: Started cri-containerd-187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247.scope - libcontainer container 187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247. Jun 20 19:16:09.597589 systemd[1]: Started cri-containerd-f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97.scope - libcontainer container f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97. 
Jun 20 19:16:09.672046 containerd[1730]: time="2025-06-20T19:16:09.672001408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-a13bac16ef,Uid:78835c9fc1b7b3cd7581c20e4ced4600,Namespace:kube-system,Attempt:0,} returns sandbox id \"187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247\"" Jun 20 19:16:09.675807 containerd[1730]: time="2025-06-20T19:16:09.675674147Z" level=info msg="CreateContainer within sandbox \"187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:16:09.676181 containerd[1730]: time="2025-06-20T19:16:09.676160656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-a13bac16ef,Uid:ebba66f99866046963b63963adfe95ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1\"" Jun 20 19:16:09.683383 containerd[1730]: time="2025-06-20T19:16:09.683342687Z" level=info msg="CreateContainer within sandbox \"d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:16:09.692796 containerd[1730]: time="2025-06-20T19:16:09.692767475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-a13bac16ef,Uid:80d23c07b1c7bff131ad3fb5a5e01f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97\"" Jun 20 19:16:09.694400 containerd[1730]: time="2025-06-20T19:16:09.694371062Z" level=info msg="CreateContainer within sandbox \"f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:16:09.717669 containerd[1730]: time="2025-06-20T19:16:09.717627966Z" level=info msg="Container ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af: CDI devices 
from CRI Config.CDIDevices: []" Jun 20 19:16:09.725646 containerd[1730]: time="2025-06-20T19:16:09.725614710Z" level=info msg="Container 97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:09.729976 kubelet[2753]: I0620 19:16:09.729955 2753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.730613 kubelet[2753]: E0620 19:16:09.730583 2753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:09.866999 kubelet[2753]: W0620 19:16:09.866936 2753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.21:6443: connect: connection refused Jun 20 19:16:09.866999 kubelet[2753]: E0620 19:16:09.867005 2753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:16:09.959048 containerd[1730]: time="2025-06-20T19:16:09.958910492Z" level=info msg="CreateContainer within sandbox \"187703b19f9beff125633eff266bf8ba9bd61b40d8802119b7677d90a3315247\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af\"" Jun 20 19:16:09.959909 containerd[1730]: time="2025-06-20T19:16:09.959880997Z" level=info msg="StartContainer for \"ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af\"" Jun 20 19:16:09.960896 containerd[1730]: time="2025-06-20T19:16:09.960856550Z" level=info msg="connecting 
to shim ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af" address="unix:///run/containerd/s/1729130589fa1dfcf5b8703336db3991b6ce20c212ac8ab6796c442e46eec7a5" protocol=ttrpc version=3 Jun 20 19:16:09.976682 systemd[1]: Started cri-containerd-ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af.scope - libcontainer container ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af. Jun 20 19:16:09.985462 containerd[1730]: time="2025-06-20T19:16:09.984336165Z" level=info msg="CreateContainer within sandbox \"d23c7020cef04ca56bc9d1d4da2423d3b2477d8c301553fbd7e878b07e7ca5b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75\"" Jun 20 19:16:09.985734 containerd[1730]: time="2025-06-20T19:16:09.985705273Z" level=info msg="Container b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:09.986219 containerd[1730]: time="2025-06-20T19:16:09.986153272Z" level=info msg="StartContainer for \"97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75\"" Jun 20 19:16:09.989080 containerd[1730]: time="2025-06-20T19:16:09.989049706Z" level=info msg="connecting to shim 97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75" address="unix:///run/containerd/s/377a79aca6f3db63ca8707349495844b4ef942de40c93c29e17947db6cc59342" protocol=ttrpc version=3 Jun 20 19:16:10.010596 systemd[1]: Started cri-containerd-97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75.scope - libcontainer container 97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75. 
Jun 20 19:16:10.141478 containerd[1730]: time="2025-06-20T19:16:10.141426697Z" level=info msg="StartContainer for \"ae7acba420557b2ce74925827d73112200579ce989386848ffe9e9e911fa70af\" returns successfully" Jun 20 19:16:10.142151 containerd[1730]: time="2025-06-20T19:16:10.142127523Z" level=info msg="StartContainer for \"97c6737bec8ca49749fc60b935e36e006a5d48e6fb6f0705edbb7ee7eba47c75\" returns successfully" Jun 20 19:16:10.151360 containerd[1730]: time="2025-06-20T19:16:10.151329347Z" level=info msg="CreateContainer within sandbox \"f7370cb234f25507d582acf03c99cc42f793951f284e16ea6cd5122f2af58d97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12\"" Jun 20 19:16:10.151936 containerd[1730]: time="2025-06-20T19:16:10.151915391Z" level=info msg="StartContainer for \"b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12\"" Jun 20 19:16:10.152988 containerd[1730]: time="2025-06-20T19:16:10.152935346Z" level=info msg="connecting to shim b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12" address="unix:///run/containerd/s/abcc0ac1e696ec02a924f2fcd50cbcce7e28fa7b491787675a56072ffb7e2e64" protocol=ttrpc version=3 Jun 20 19:16:10.181610 systemd[1]: Started cri-containerd-b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12.scope - libcontainer container b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12. 
Jun 20 19:16:10.262606 containerd[1730]: time="2025-06-20T19:16:10.262461278Z" level=info msg="StartContainer for \"b011534012c1754c1138a56e9e84e4b4f3d98c2845a9df6ce2f5a16ac7293c12\" returns successfully" Jun 20 19:16:10.533188 kubelet[2753]: I0620 19:16:10.532899 2753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:11.659326 kubelet[2753]: E0620 19:16:11.659256 2753 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-a13bac16ef\" not found" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:11.820638 kubelet[2753]: I0620 19:16:11.820594 2753 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:11.944505 kubelet[2753]: I0620 19:16:11.944478 2753 apiserver.go:52] "Watching apiserver" Jun 20 19:16:11.948700 kubelet[2753]: I0620 19:16:11.948657 2753 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:16:12.038528 kubelet[2753]: E0620 19:16:12.038307 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:12.038528 kubelet[2753]: E0620 19:16:12.038346 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344.1.0-a-a13bac16ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:12.038528 kubelet[2753]: E0620 19:16:12.038307 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:13.042031 kubelet[2753]: W0620 19:16:13.041992 2753 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:13.309537 kubelet[2753]: W0620 19:16:13.309389 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:13.883223 systemd[1]: Reload requested from client PID 3025 ('systemctl') (unit session-9.scope)... Jun 20 19:16:13.883238 systemd[1]: Reloading... Jun 20 19:16:13.966499 zram_generator::config[3070]: No configuration found. Jun 20 19:16:14.055716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:16:14.167310 systemd[1]: Reloading finished in 283 ms. Jun 20 19:16:14.193062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:16:14.211474 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:16:14.211720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:16:14.211780 systemd[1]: kubelet.service: Consumed 828ms CPU time, 130.6M memory peak. Jun 20 19:16:14.213638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:16:15.146414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:16:15.155774 (kubelet)[3138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:16:15.217847 kubelet[3138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:16:15.217847 kubelet[3138]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:16:15.217847 kubelet[3138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:16:15.218245 kubelet[3138]: I0620 19:16:15.217899 3138 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:16:15.228932 kubelet[3138]: I0620 19:16:15.228748 3138 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:16:15.228932 kubelet[3138]: I0620 19:16:15.228775 3138 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:16:15.229083 kubelet[3138]: I0620 19:16:15.229008 3138 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:16:15.230022 kubelet[3138]: I0620 19:16:15.230000 3138 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:16:15.231954 kubelet[3138]: I0620 19:16:15.231632 3138 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:16:15.236895 kubelet[3138]: I0620 19:16:15.236865 3138 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:16:15.239727 kubelet[3138]: I0620 19:16:15.239700 3138 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:16:15.239844 kubelet[3138]: I0620 19:16:15.239790 3138 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:16:15.239921 kubelet[3138]: I0620 19:16:15.239896 3138 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:16:15.240075 kubelet[3138]: I0620 19:16:15.239923 3138 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-a13bac16ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:16:15.240169 kubelet[3138]: I0620 19:16:15.240083 3138 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:16:15.240169 kubelet[3138]: I0620 19:16:15.240093 3138 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:16:15.240169 kubelet[3138]: I0620 19:16:15.240119 3138 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:15.240235 kubelet[3138]: I0620 19:16:15.240208 3138 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:16:15.240235 kubelet[3138]: I0620 19:16:15.240227 3138 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:16:15.240489 kubelet[3138]: I0620 19:16:15.240280 3138 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:16:15.240489 kubelet[3138]: I0620 19:16:15.240301 3138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:16:15.246115 kubelet[3138]: I0620 19:16:15.245689 3138 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:16:15.248922 kubelet[3138]: I0620 19:16:15.246982 3138 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:16:15.251873 kubelet[3138]: I0620 19:16:15.251854 3138 server.go:1274] "Started kubelet" Jun 20 19:16:15.258405 kubelet[3138]: I0620 19:16:15.257781 3138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:16:15.258731 kubelet[3138]: I0620 19:16:15.258687 3138 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:16:15.259918 kubelet[3138]: I0620 19:16:15.259727 3138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:16:15.261592 kubelet[3138]: I0620 19:16:15.261573 3138 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 
19:16:15.261700 kubelet[3138]: E0620 19:16:15.261686 3138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a13bac16ef\" not found" Jun 20 19:16:15.267514 kubelet[3138]: I0620 19:16:15.266708 3138 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:16:15.267514 kubelet[3138]: I0620 19:16:15.266830 3138 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:16:15.267514 kubelet[3138]: I0620 19:16:15.267338 3138 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:16:15.268584 kubelet[3138]: I0620 19:16:15.268237 3138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:16:15.268584 kubelet[3138]: I0620 19:16:15.268397 3138 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:16:15.269843 kubelet[3138]: I0620 19:16:15.269351 3138 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:16:15.269843 kubelet[3138]: I0620 19:16:15.269451 3138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:16:15.276402 kubelet[3138]: I0620 19:16:15.276292 3138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:16:15.277289 kubelet[3138]: I0620 19:16:15.277269 3138 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:16:15.279454 kubelet[3138]: I0620 19:16:15.278802 3138 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:16:15.279454 kubelet[3138]: I0620 19:16:15.278823 3138 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:16:15.279454 kubelet[3138]: I0620 19:16:15.278841 3138 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:16:15.279454 kubelet[3138]: E0620 19:16:15.278882 3138 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:16:15.289474 kubelet[3138]: E0620 19:16:15.289231 3138 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:16:15.300289 sudo[3166]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:16:15.300989 sudo[3166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344112 3138 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344128 3138 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344146 3138 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344285 3138 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344295 3138 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:16:15.344566 kubelet[3138]: I0620 19:16:15.344324 3138 policy_none.go:49] "None policy: Start" Jun 20 19:16:15.345327 kubelet[3138]: I0620 19:16:15.345308 3138 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:16:15.345398 kubelet[3138]: I0620 19:16:15.345393 3138 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:16:15.345664 kubelet[3138]: I0620 19:16:15.345600 3138 state_mem.go:75] 
"Updated machine memory state" Jun 20 19:16:15.350607 kubelet[3138]: I0620 19:16:15.350587 3138 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:16:15.351487 kubelet[3138]: I0620 19:16:15.351179 3138 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:16:15.351487 kubelet[3138]: I0620 19:16:15.351193 3138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:16:15.354344 kubelet[3138]: I0620 19:16:15.354331 3138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:16:15.390257 kubelet[3138]: W0620 19:16:15.390231 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:15.395092 kubelet[3138]: W0620 19:16:15.394965 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:15.396351 kubelet[3138]: W0620 19:16:15.395327 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:15.396351 kubelet[3138]: E0620 19:16:15.396303 3138 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.396621 kubelet[3138]: E0620 19:16:15.396504 3138 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.460046 kubelet[3138]: I0620 19:16:15.460019 3138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 
19:16:15.475458 kubelet[3138]: I0620 19:16:15.475261 3138 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.476058 kubelet[3138]: I0620 19:16:15.475834 3138 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.568364 kubelet[3138]: I0620 19:16:15.568269 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.568819 kubelet[3138]: I0620 19:16:15.568685 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569027 kubelet[3138]: I0620 19:16:15.568892 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569027 kubelet[3138]: I0620 19:16:15.568914 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebba66f99866046963b63963adfe95ac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" (UID: \"ebba66f99866046963b63963adfe95ac\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" 
Jun 20 19:16:15.569175 kubelet[3138]: I0620 19:16:15.569006 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569175 kubelet[3138]: I0620 19:16:15.569121 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569175 kubelet[3138]: I0620 19:16:15.569137 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78835c9fc1b7b3cd7581c20e4ced4600-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-a13bac16ef\" (UID: \"78835c9fc1b7b3cd7581c20e4ced4600\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569175 kubelet[3138]: I0620 19:16:15.569152 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.569658 kubelet[3138]: I0620 19:16:15.569545 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d23c07b1c7bff131ad3fb5a5e01f66-usr-share-ca-certificates\") 
pod \"kube-controller-manager-ci-4344.1.0-a-a13bac16ef\" (UID: \"80d23c07b1c7bff131ad3fb5a5e01f66\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:15.874006 sudo[3166]: pam_unix(sudo:session): session closed for user root Jun 20 19:16:16.242450 kubelet[3138]: I0620 19:16:16.242301 3138 apiserver.go:52] "Watching apiserver" Jun 20 19:16:16.268449 kubelet[3138]: I0620 19:16:16.267696 3138 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:16:16.337356 kubelet[3138]: W0620 19:16:16.337310 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:16:16.337688 kubelet[3138]: E0620 19:16:16.337597 3138 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-a13bac16ef\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" Jun 20 19:16:16.341466 kubelet[3138]: I0620 19:16:16.341389 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a13bac16ef" podStartSLOduration=3.341372162 podStartE2EDuration="3.341372162s" podCreationTimestamp="2025-06-20 19:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:16.341163128 +0000 UTC m=+1.177469689" watchObservedRunningTime="2025-06-20 19:16:16.341372162 +0000 UTC m=+1.177678722" Jun 20 19:16:16.359860 kubelet[3138]: I0620 19:16:16.359760 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a13bac16ef" podStartSLOduration=1.359741881 podStartE2EDuration="1.359741881s" podCreationTimestamp="2025-06-20 19:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-06-20 19:16:16.359505211 +0000 UTC m=+1.195811800" watchObservedRunningTime="2025-06-20 19:16:16.359741881 +0000 UTC m=+1.196048442" Jun 20 19:16:16.394966 kubelet[3138]: I0620 19:16:16.394901 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a13bac16ef" podStartSLOduration=3.39483254 podStartE2EDuration="3.39483254s" podCreationTimestamp="2025-06-20 19:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:16.380113433 +0000 UTC m=+1.216419988" watchObservedRunningTime="2025-06-20 19:16:16.39483254 +0000 UTC m=+1.231139096" Jun 20 19:16:17.241304 sudo[2179]: pam_unix(sudo:session): session closed for user root Jun 20 19:16:17.334540 sshd[2178]: Connection closed by 10.200.16.10 port 39832 Jun 20 19:16:17.335098 sshd-session[2176]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:17.337870 systemd[1]: sshd@6-10.200.4.21:22-10.200.16.10:39832.service: Deactivated successfully. Jun 20 19:16:17.339912 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:16:17.340083 systemd[1]: session-9.scope: Consumed 3.996s CPU time, 268.7M memory peak. Jun 20 19:16:17.342657 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:16:17.343572 systemd-logind[1706]: Removed session 9. Jun 20 19:16:19.715251 kubelet[3138]: I0620 19:16:19.715199 3138 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:16:19.715968 containerd[1730]: time="2025-06-20T19:16:19.715545092Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 20 19:16:19.716659 kubelet[3138]: I0620 19:16:19.716168 3138 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:16:20.653071 systemd[1]: Created slice kubepods-besteffort-podf81939a6_fe9b_4991_a5fd_85599e9a7480.slice - libcontainer container kubepods-besteffort-podf81939a6_fe9b_4991_a5fd_85599e9a7480.slice. Jun 20 19:16:20.664934 systemd[1]: Created slice kubepods-burstable-pod029e13ff_af0a_43fc_a0d3_2fb127fc90fa.slice - libcontainer container kubepods-burstable-pod029e13ff_af0a_43fc_a0d3_2fb127fc90fa.slice. Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702400 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81939a6-fe9b-4991-a5fd-85599e9a7480-xtables-lock\") pod \"kube-proxy-sg7ch\" (UID: \"f81939a6-fe9b-4991-a5fd-85599e9a7480\") " pod="kube-system/kube-proxy-sg7ch" Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702455 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cni-path\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702476 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-lib-modules\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702494 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-clustermesh-secrets\") pod \"cilium-zwndf\" (UID: 
\"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702514 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-kernel\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702679 kubelet[3138]: I0620 19:16:20.702539 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-cgroup\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702968 kubelet[3138]: I0620 19:16:20.702561 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-etc-cni-netd\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702968 kubelet[3138]: I0620 19:16:20.702579 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-config-path\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702968 kubelet[3138]: I0620 19:16:20.702604 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hostproc\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702968 kubelet[3138]: I0620 
19:16:20.702629 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hubble-tls\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.702968 kubelet[3138]: I0620 19:16:20.702657 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9x7w\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-kube-api-access-q9x7w\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702692 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ds2t\" (UniqueName: \"kubernetes.io/projected/f81939a6-fe9b-4991-a5fd-85599e9a7480-kube-api-access-5ds2t\") pod \"kube-proxy-sg7ch\" (UID: \"f81939a6-fe9b-4991-a5fd-85599e9a7480\") " pod="kube-system/kube-proxy-sg7ch" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702715 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-run\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702742 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-xtables-lock\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702757 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f81939a6-fe9b-4991-a5fd-85599e9a7480-kube-proxy\") pod \"kube-proxy-sg7ch\" (UID: \"f81939a6-fe9b-4991-a5fd-85599e9a7480\") " pod="kube-system/kube-proxy-sg7ch" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702775 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81939a6-fe9b-4991-a5fd-85599e9a7480-lib-modules\") pod \"kube-proxy-sg7ch\" (UID: \"f81939a6-fe9b-4991-a5fd-85599e9a7480\") " pod="kube-system/kube-proxy-sg7ch" Jun 20 19:16:20.703080 kubelet[3138]: I0620 19:16:20.702807 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-bpf-maps\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.703213 kubelet[3138]: I0620 19:16:20.702822 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-net\") pod \"cilium-zwndf\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " pod="kube-system/cilium-zwndf" Jun 20 19:16:20.857404 systemd[1]: Created slice kubepods-besteffort-pod3af6c32a_b96f_4c57_b297_8fcd646d4e3f.slice - libcontainer container kubepods-besteffort-pod3af6c32a_b96f_4c57_b297_8fcd646d4e3f.slice. 
Jun 20 19:16:20.903698 kubelet[3138]: I0620 19:16:20.903657 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-cilium-config-path\") pod \"cilium-operator-5d85765b45-m7vqk\" (UID: \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\") " pod="kube-system/cilium-operator-5d85765b45-m7vqk" Jun 20 19:16:20.904093 kubelet[3138]: I0620 19:16:20.903721 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv9w\" (UniqueName: \"kubernetes.io/projected/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-kube-api-access-6kv9w\") pod \"cilium-operator-5d85765b45-m7vqk\" (UID: \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\") " pod="kube-system/cilium-operator-5d85765b45-m7vqk" Jun 20 19:16:20.960522 containerd[1730]: time="2025-06-20T19:16:20.960479735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg7ch,Uid:f81939a6-fe9b-4991-a5fd-85599e9a7480,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:20.973548 containerd[1730]: time="2025-06-20T19:16:20.973495536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwndf,Uid:029e13ff-af0a-43fc-a0d3-2fb127fc90fa,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:21.018458 containerd[1730]: time="2025-06-20T19:16:21.018229884Z" level=info msg="connecting to shim 3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b" address="unix:///run/containerd/s/49d26c4d2a3f3330ea44e82bdf92332afdeca1365cd3d31494d8df342bb15dad" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:21.035629 containerd[1730]: time="2025-06-20T19:16:21.035586518Z" level=info msg="connecting to shim 7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:21.040864 systemd[1]: Started 
cri-containerd-3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b.scope - libcontainer container 3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b. Jun 20 19:16:21.065618 systemd[1]: Started cri-containerd-7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd.scope - libcontainer container 7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd. Jun 20 19:16:21.079705 containerd[1730]: time="2025-06-20T19:16:21.079664261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg7ch,Uid:f81939a6-fe9b-4991-a5fd-85599e9a7480,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b\"" Jun 20 19:16:21.087161 containerd[1730]: time="2025-06-20T19:16:21.085383758Z" level=info msg="CreateContainer within sandbox \"3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:16:21.095647 containerd[1730]: time="2025-06-20T19:16:21.095615973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwndf,Uid:029e13ff-af0a-43fc-a0d3-2fb127fc90fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\"" Jun 20 19:16:21.097199 containerd[1730]: time="2025-06-20T19:16:21.097171205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:16:21.112389 containerd[1730]: time="2025-06-20T19:16:21.112358138Z" level=info msg="Container 75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:21.128258 containerd[1730]: time="2025-06-20T19:16:21.127893638Z" level=info msg="CreateContainer within sandbox \"3cbef149adef1b3aa21739d561aa0dead92bb7de7c20e0df440abc3d9866135b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66\"" Jun 20 19:16:21.131839 containerd[1730]: time="2025-06-20T19:16:21.131807714Z" level=info msg="StartContainer for \"75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66\"" Jun 20 19:16:21.134097 containerd[1730]: time="2025-06-20T19:16:21.134063300Z" level=info msg="connecting to shim 75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66" address="unix:///run/containerd/s/49d26c4d2a3f3330ea44e82bdf92332afdeca1365cd3d31494d8df342bb15dad" protocol=ttrpc version=3 Jun 20 19:16:21.149618 systemd[1]: Started cri-containerd-75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66.scope - libcontainer container 75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66. Jun 20 19:16:21.164458 containerd[1730]: time="2025-06-20T19:16:21.164213051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m7vqk,Uid:3af6c32a-b96f-4c57-b297-8fcd646d4e3f,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:21.185346 containerd[1730]: time="2025-06-20T19:16:21.185314437Z" level=info msg="StartContainer for \"75ef5357849f41ddc1841ee209076a0cefc554e08d90caadded7b1c708549e66\" returns successfully" Jun 20 19:16:21.207360 containerd[1730]: time="2025-06-20T19:16:21.207315152Z" level=info msg="connecting to shim b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68" address="unix:///run/containerd/s/25a82f54474a3192f7b93c705936b8b82fdb735fdb89395503adffd17e06c685" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:21.232636 systemd[1]: Started cri-containerd-b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68.scope - libcontainer container b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68. 
Jun 20 19:16:21.278099 containerd[1730]: time="2025-06-20T19:16:21.278056699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m7vqk,Uid:3af6c32a-b96f-4c57-b297-8fcd646d4e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\"" Jun 20 19:16:26.578562 kubelet[3138]: I0620 19:16:26.578288 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sg7ch" podStartSLOduration=6.57741378 podStartE2EDuration="6.57741378s" podCreationTimestamp="2025-06-20 19:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:21.340471513 +0000 UTC m=+6.176778080" watchObservedRunningTime="2025-06-20 19:16:26.57741378 +0000 UTC m=+11.413720347" Jun 20 19:16:28.676963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747046976.mount: Deactivated successfully. 
Jun 20 19:16:30.224224 containerd[1730]: time="2025-06-20T19:16:30.224172022Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:30.226392 containerd[1730]: time="2025-06-20T19:16:30.226352158Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:16:30.229189 containerd[1730]: time="2025-06-20T19:16:30.229149511Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:30.230452 containerd[1730]: time="2025-06-20T19:16:30.230103967Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.132902204s" Jun 20 19:16:30.230452 containerd[1730]: time="2025-06-20T19:16:30.230138817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:16:30.231278 containerd[1730]: time="2025-06-20T19:16:30.231251725Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:16:30.233243 containerd[1730]: time="2025-06-20T19:16:30.232716645Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:16:30.251879 containerd[1730]: time="2025-06-20T19:16:30.251848185Z" level=info msg="Container 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:30.272130 containerd[1730]: time="2025-06-20T19:16:30.272089760Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\"" Jun 20 19:16:30.272655 containerd[1730]: time="2025-06-20T19:16:30.272632632Z" level=info msg="StartContainer for \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\"" Jun 20 19:16:30.274190 containerd[1730]: time="2025-06-20T19:16:30.273556110Z" level=info msg="connecting to shim 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" protocol=ttrpc version=3 Jun 20 19:16:30.298614 systemd[1]: Started cri-containerd-5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9.scope - libcontainer container 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9. Jun 20 19:16:30.327723 containerd[1730]: time="2025-06-20T19:16:30.327686727Z" level=info msg="StartContainer for \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" returns successfully" Jun 20 19:16:30.335147 systemd[1]: cri-containerd-5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9.scope: Deactivated successfully. 
Jun 20 19:16:30.338597 containerd[1730]: time="2025-06-20T19:16:30.338542244Z" level=info msg="received exit event container_id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" pid:3550 exited_at:{seconds:1750446990 nanos:338083370}" Jun 20 19:16:30.338597 containerd[1730]: time="2025-06-20T19:16:30.338574586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" pid:3550 exited_at:{seconds:1750446990 nanos:338083370}" Jun 20 19:16:30.362321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9-rootfs.mount: Deactivated successfully. Jun 20 19:16:40.339236 containerd[1730]: time="2025-06-20T19:16:40.339138937Z" level=error msg="failed to handle container TaskExit event container_id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" pid:3550 exited_at:{seconds:1750446990 nanos:338083370}" error="failed to stop container: failed to delete task: context deadline exceeded" Jun 20 19:16:41.799993 containerd[1730]: time="2025-06-20T19:16:41.799932264Z" level=info msg="TaskExit event container_id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" id:\"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" pid:3550 exited_at:{seconds:1750446990 nanos:338083370}" Jun 20 19:16:43.800646 containerd[1730]: time="2025-06-20T19:16:43.800578004Z" level=error msg="get state for 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" error="context deadline exceeded" Jun 20 19:16:43.800646 containerd[1730]: time="2025-06-20T19:16:43.800638912Z" level=warning msg="unknown status" status=0 Jun 20 19:16:45.801575 containerd[1730]: 
time="2025-06-20T19:16:45.801513479Z" level=error msg="get state for 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" error="context deadline exceeded" Jun 20 19:16:45.801575 containerd[1730]: time="2025-06-20T19:16:45.801560548Z" level=warning msg="unknown status" status=0 Jun 20 19:16:46.768104 containerd[1730]: time="2025-06-20T19:16:46.767992732Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Jun 20 19:16:46.768104 containerd[1730]: time="2025-06-20T19:16:46.768082049Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Jun 20 19:16:46.768104 containerd[1730]: time="2025-06-20T19:16:46.768094324Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jun 20 19:16:46.769265 containerd[1730]: time="2025-06-20T19:16:46.769234682Z" level=info msg="Ensure that container 5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9 in task-service has been cleanup successfully" Jun 20 19:16:47.418014 containerd[1730]: time="2025-06-20T19:16:47.417966424Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:16:48.239876 containerd[1730]: time="2025-06-20T19:16:48.239807383Z" level=info msg="Container 88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:48.243232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064093322.mount: Deactivated successfully. Jun 20 19:16:48.256411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249081615.mount: Deactivated successfully. 
Jun 20 19:16:48.296229 containerd[1730]: time="2025-06-20T19:16:48.296190287Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\"" Jun 20 19:16:48.296763 containerd[1730]: time="2025-06-20T19:16:48.296726024Z" level=info msg="StartContainer for \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\"" Jun 20 19:16:48.297678 containerd[1730]: time="2025-06-20T19:16:48.297650832Z" level=info msg="connecting to shim 88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" protocol=ttrpc version=3 Jun 20 19:16:48.320640 systemd[1]: Started cri-containerd-88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0.scope - libcontainer container 88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0. Jun 20 19:16:48.355805 containerd[1730]: time="2025-06-20T19:16:48.355766155Z" level=info msg="StartContainer for \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" returns successfully" Jun 20 19:16:48.370120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:16:48.370345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:16:48.370756 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:16:48.373035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:16:48.374543 systemd[1]: cri-containerd-88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0.scope: Deactivated successfully. 
Jun 20 19:16:48.376486 containerd[1730]: time="2025-06-20T19:16:48.375379368Z" level=info msg="received exit event container_id:\"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" id:\"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" pid:3604 exited_at:{seconds:1750447008 nanos:375191175}" Jun 20 19:16:48.376486 containerd[1730]: time="2025-06-20T19:16:48.375627320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" id:\"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" pid:3604 exited_at:{seconds:1750447008 nanos:375191175}" Jun 20 19:16:48.404278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:16:48.765900 containerd[1730]: time="2025-06-20T19:16:48.765855957Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:48.768220 containerd[1730]: time="2025-06-20T19:16:48.768178744Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:16:48.770684 containerd[1730]: time="2025-06-20T19:16:48.770640441Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:48.771621 containerd[1730]: time="2025-06-20T19:16:48.771508237Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 18.540222755s" Jun 20 19:16:48.771621 containerd[1730]: time="2025-06-20T19:16:48.771543165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:16:48.773698 containerd[1730]: time="2025-06-20T19:16:48.773644710Z" level=info msg="CreateContainer within sandbox \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:16:48.793413 containerd[1730]: time="2025-06-20T19:16:48.793378604Z" level=info msg="Container bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:48.809107 containerd[1730]: time="2025-06-20T19:16:48.809072561Z" level=info msg="CreateContainer within sandbox \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\"" Jun 20 19:16:48.809672 containerd[1730]: time="2025-06-20T19:16:48.809626888Z" level=info msg="StartContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\"" Jun 20 19:16:48.810988 containerd[1730]: time="2025-06-20T19:16:48.810727298Z" level=info msg="connecting to shim bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041" address="unix:///run/containerd/s/25a82f54474a3192f7b93c705936b8b82fdb735fdb89395503adffd17e06c685" protocol=ttrpc version=3 Jun 20 19:16:48.830601 systemd[1]: Started cri-containerd-bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041.scope - libcontainer container bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041. 
Jun 20 19:16:48.858824 containerd[1730]: time="2025-06-20T19:16:48.858761081Z" level=info msg="StartContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" returns successfully" Jun 20 19:16:49.242930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0-rootfs.mount: Deactivated successfully. Jun 20 19:16:49.400203 containerd[1730]: time="2025-06-20T19:16:49.400160825Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:16:49.433681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745204977.mount: Deactivated successfully. Jun 20 19:16:49.434758 containerd[1730]: time="2025-06-20T19:16:49.434716995Z" level=info msg="Container b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:49.464315 containerd[1730]: time="2025-06-20T19:16:49.464255079Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\"" Jun 20 19:16:49.465079 containerd[1730]: time="2025-06-20T19:16:49.465044447Z" level=info msg="StartContainer for \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\"" Jun 20 19:16:49.467221 containerd[1730]: time="2025-06-20T19:16:49.467181437Z" level=info msg="connecting to shim b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" protocol=ttrpc version=3 Jun 20 19:16:49.502600 systemd[1]: Started cri-containerd-b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a.scope - libcontainer container 
b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a. Jun 20 19:16:49.532085 kubelet[3138]: I0620 19:16:49.531606 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m7vqk" podStartSLOduration=2.03976296 podStartE2EDuration="29.531583194s" podCreationTimestamp="2025-06-20 19:16:20 +0000 UTC" firstStartedPulling="2025-06-20 19:16:21.280503913 +0000 UTC m=+6.116810460" lastFinishedPulling="2025-06-20 19:16:48.772324146 +0000 UTC m=+33.608630694" observedRunningTime="2025-06-20 19:16:49.462511739 +0000 UTC m=+34.298818303" watchObservedRunningTime="2025-06-20 19:16:49.531583194 +0000 UTC m=+34.367889747" Jun 20 19:16:49.597861 containerd[1730]: time="2025-06-20T19:16:49.597656830Z" level=info msg="StartContainer for \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" returns successfully" Jun 20 19:16:49.603844 systemd[1]: cri-containerd-b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a.scope: Deactivated successfully. Jun 20 19:16:49.608822 containerd[1730]: time="2025-06-20T19:16:49.608788449Z" level=info msg="received exit event container_id:\"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" id:\"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" pid:3691 exited_at:{seconds:1750447009 nanos:608256161}" Jun 20 19:16:49.610562 containerd[1730]: time="2025-06-20T19:16:49.610521315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" id:\"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" pid:3691 exited_at:{seconds:1750447009 nanos:608256161}" Jun 20 19:16:50.238085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:50.405256 containerd[1730]: time="2025-06-20T19:16:50.404503034Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:16:50.426453 containerd[1730]: time="2025-06-20T19:16:50.426123903Z" level=info msg="Container 2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:50.445047 containerd[1730]: time="2025-06-20T19:16:50.445007510Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\"" Jun 20 19:16:50.445929 containerd[1730]: time="2025-06-20T19:16:50.445904900Z" level=info msg="StartContainer for \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\"" Jun 20 19:16:50.446892 containerd[1730]: time="2025-06-20T19:16:50.446864086Z" level=info msg="connecting to shim 2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" protocol=ttrpc version=3 Jun 20 19:16:50.464976 systemd[1]: Started cri-containerd-2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c.scope - libcontainer container 2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c. Jun 20 19:16:50.488359 systemd[1]: cri-containerd-2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c.scope: Deactivated successfully. 
Jun 20 19:16:50.489698 containerd[1730]: time="2025-06-20T19:16:50.489663553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" id:\"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" pid:3730 exited_at:{seconds:1750447010 nanos:489180232}" Jun 20 19:16:50.493229 containerd[1730]: time="2025-06-20T19:16:50.493151234Z" level=info msg="received exit event container_id:\"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" id:\"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" pid:3730 exited_at:{seconds:1750447010 nanos:489180232}" Jun 20 19:16:50.499116 containerd[1730]: time="2025-06-20T19:16:50.499088695Z" level=info msg="StartContainer for \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" returns successfully" Jun 20 19:16:50.510252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:51.408367 containerd[1730]: time="2025-06-20T19:16:51.408257737Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:16:51.433298 containerd[1730]: time="2025-06-20T19:16:51.431120618Z" level=info msg="Container 8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:51.447770 containerd[1730]: time="2025-06-20T19:16:51.447726991Z" level=info msg="CreateContainer within sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\"" Jun 20 19:16:51.448283 containerd[1730]: time="2025-06-20T19:16:51.448234163Z" level=info msg="StartContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\"" Jun 20 19:16:51.449368 containerd[1730]: time="2025-06-20T19:16:51.449285646Z" level=info msg="connecting to shim 8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3" address="unix:///run/containerd/s/8aece47aea69dc0fcbefb9a6614f6bffb740998aabfa6d937905b8f1e55173f6" protocol=ttrpc version=3 Jun 20 19:16:51.466598 systemd[1]: Started cri-containerd-8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3.scope - libcontainer container 8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3. 
Jun 20 19:16:51.498226 containerd[1730]: time="2025-06-20T19:16:51.498185816Z" level=info msg="StartContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" returns successfully" Jun 20 19:16:51.572310 containerd[1730]: time="2025-06-20T19:16:51.572238517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" id:\"f94dc039ac0262887bca1a39dc3bf139091d984aec9a4448246619cd36683f92\" pid:3803 exited_at:{seconds:1750447011 nanos:570941838}" Jun 20 19:16:51.647580 kubelet[3138]: I0620 19:16:51.646492 3138 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:16:51.693102 systemd[1]: Created slice kubepods-burstable-poda61266b8_f34d_4462_95fd_b7118b483dcb.slice - libcontainer container kubepods-burstable-poda61266b8_f34d_4462_95fd_b7118b483dcb.slice. Jun 20 19:16:51.704787 systemd[1]: Created slice kubepods-burstable-podb592f86f_44fc_4c77_a590_58750385d672.slice - libcontainer container kubepods-burstable-podb592f86f_44fc_4c77_a590_58750385d672.slice. 
Jun 20 19:16:51.784806 kubelet[3138]: I0620 19:16:51.784754 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq87m\" (UniqueName: \"kubernetes.io/projected/a61266b8-f34d-4462-95fd-b7118b483dcb-kube-api-access-kq87m\") pod \"coredns-7c65d6cfc9-n28mf\" (UID: \"a61266b8-f34d-4462-95fd-b7118b483dcb\") " pod="kube-system/coredns-7c65d6cfc9-n28mf" Jun 20 19:16:51.784806 kubelet[3138]: I0620 19:16:51.784810 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a61266b8-f34d-4462-95fd-b7118b483dcb-config-volume\") pod \"coredns-7c65d6cfc9-n28mf\" (UID: \"a61266b8-f34d-4462-95fd-b7118b483dcb\") " pod="kube-system/coredns-7c65d6cfc9-n28mf" Jun 20 19:16:51.886062 kubelet[3138]: I0620 19:16:51.886009 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8r57\" (UniqueName: \"kubernetes.io/projected/b592f86f-44fc-4c77-a590-58750385d672-kube-api-access-k8r57\") pod \"coredns-7c65d6cfc9-94qdh\" (UID: \"b592f86f-44fc-4c77-a590-58750385d672\") " pod="kube-system/coredns-7c65d6cfc9-94qdh" Jun 20 19:16:51.886062 kubelet[3138]: I0620 19:16:51.886063 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b592f86f-44fc-4c77-a590-58750385d672-config-volume\") pod \"coredns-7c65d6cfc9-94qdh\" (UID: \"b592f86f-44fc-4c77-a590-58750385d672\") " pod="kube-system/coredns-7c65d6cfc9-94qdh" Jun 20 19:16:52.001341 containerd[1730]: time="2025-06-20T19:16:52.001238521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n28mf,Uid:a61266b8-f34d-4462-95fd-b7118b483dcb,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:52.008493 containerd[1730]: time="2025-06-20T19:16:52.008416886Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-94qdh,Uid:b592f86f-44fc-4c77-a590-58750385d672,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:52.430202 kubelet[3138]: I0620 19:16:52.430142 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zwndf" podStartSLOduration=23.295676747 podStartE2EDuration="32.430121986s" podCreationTimestamp="2025-06-20 19:16:20 +0000 UTC" firstStartedPulling="2025-06-20 19:16:21.096641564 +0000 UTC m=+5.932948123" lastFinishedPulling="2025-06-20 19:16:30.231086804 +0000 UTC m=+15.067393362" observedRunningTime="2025-06-20 19:16:52.429886333 +0000 UTC m=+37.266192931" watchObservedRunningTime="2025-06-20 19:16:52.430121986 +0000 UTC m=+37.266428562" Jun 20 19:16:53.688378 systemd-networkd[1368]: cilium_host: Link UP Jun 20 19:16:53.689495 systemd-networkd[1368]: cilium_net: Link UP Jun 20 19:16:53.689624 systemd-networkd[1368]: cilium_net: Gained carrier Jun 20 19:16:53.689717 systemd-networkd[1368]: cilium_host: Gained carrier Jun 20 19:16:53.840517 systemd-networkd[1368]: cilium_vxlan: Link UP Jun 20 19:16:53.840529 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jun 20 19:16:54.041611 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jun 20 19:16:54.061538 kernel: NET: Registered PF_ALG protocol family Jun 20 19:16:54.273614 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jun 20 19:16:54.568293 systemd-networkd[1368]: lxc_health: Link UP Jun 20 19:16:54.569457 systemd-networkd[1368]: lxc_health: Gained carrier Jun 20 19:16:55.041804 kernel: eth0: renamed from tmpcd7f1 Jun 20 19:16:55.040181 systemd-networkd[1368]: lxc8ad8d14f8c22: Link UP Jun 20 19:16:55.044867 systemd-networkd[1368]: lxc8ad8d14f8c22: Gained carrier Jun 20 19:16:55.077391 systemd-networkd[1368]: lxcaaec22f2dba8: Link UP Jun 20 19:16:55.077559 kernel: eth0: renamed from tmp6e41f Jun 20 19:16:55.081027 systemd-networkd[1368]: lxcaaec22f2dba8: Gained carrier Jun 20 19:16:55.809808 systemd-networkd[1368]: cilium_vxlan: 
Gained IPv6LL Jun 20 19:16:55.937642 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jun 20 19:16:56.897609 systemd-networkd[1368]: lxc8ad8d14f8c22: Gained IPv6LL Jun 20 19:16:56.961872 systemd-networkd[1368]: lxcaaec22f2dba8: Gained IPv6LL Jun 20 19:16:58.102183 containerd[1730]: time="2025-06-20T19:16:58.102125005Z" level=info msg="connecting to shim cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a" address="unix:///run/containerd/s/82a387d53520f1b6eeaa74996de19a339a4353faa8591d82f39947177af15fcd" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:58.107168 containerd[1730]: time="2025-06-20T19:16:58.107109070Z" level=info msg="connecting to shim 6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2" address="unix:///run/containerd/s/ddea063cd27be2669e34001f15a9068ac8bf369b6ef1d45be9b048904d64ba68" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:58.139580 systemd[1]: Started cri-containerd-cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a.scope - libcontainer container cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a. Jun 20 19:16:58.143493 systemd[1]: Started cri-containerd-6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2.scope - libcontainer container 6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2. 
Jun 20 19:16:58.198319 containerd[1730]: time="2025-06-20T19:16:58.198277166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n28mf,Uid:a61266b8-f34d-4462-95fd-b7118b483dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a\"" Jun 20 19:16:58.201251 containerd[1730]: time="2025-06-20T19:16:58.201215778Z" level=info msg="CreateContainer within sandbox \"cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:58.202138 containerd[1730]: time="2025-06-20T19:16:58.202117819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-94qdh,Uid:b592f86f-44fc-4c77-a590-58750385d672,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2\"" Jun 20 19:16:58.218960 containerd[1730]: time="2025-06-20T19:16:58.218931363Z" level=info msg="CreateContainer within sandbox \"6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:58.223180 containerd[1730]: time="2025-06-20T19:16:58.222257899Z" level=info msg="Container 20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:58.240860 containerd[1730]: time="2025-06-20T19:16:58.240834175Z" level=info msg="CreateContainer within sandbox \"cd7f10877c6075ad0eb60070507c43cb502785e32a388aee5541312dc5e1e34a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715\"" Jun 20 19:16:58.241498 containerd[1730]: time="2025-06-20T19:16:58.241424500Z" level=info msg="StartContainer for \"20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715\"" Jun 20 19:16:58.242564 containerd[1730]: time="2025-06-20T19:16:58.242512657Z" level=info 
msg="connecting to shim 20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715" address="unix:///run/containerd/s/82a387d53520f1b6eeaa74996de19a339a4353faa8591d82f39947177af15fcd" protocol=ttrpc version=3 Jun 20 19:16:58.255469 containerd[1730]: time="2025-06-20T19:16:58.254501853Z" level=info msg="Container 65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:58.259592 systemd[1]: Started cri-containerd-20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715.scope - libcontainer container 20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715. Jun 20 19:16:58.282087 containerd[1730]: time="2025-06-20T19:16:58.282039562Z" level=info msg="CreateContainer within sandbox \"6e41fc0cbde67b180537c5bf50c836b74de96af98152bfa77cbaec8f664c49d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0\"" Jun 20 19:16:58.283267 containerd[1730]: time="2025-06-20T19:16:58.283117760Z" level=info msg="StartContainer for \"65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0\"" Jun 20 19:16:58.284681 containerd[1730]: time="2025-06-20T19:16:58.284609031Z" level=info msg="connecting to shim 65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0" address="unix:///run/containerd/s/ddea063cd27be2669e34001f15a9068ac8bf369b6ef1d45be9b048904d64ba68" protocol=ttrpc version=3 Jun 20 19:16:58.289285 containerd[1730]: time="2025-06-20T19:16:58.289213428Z" level=info msg="StartContainer for \"20edcfaa3348880e30dc66ef674e79f59116c9e3c4a60e0d47e695cd6535a715\" returns successfully" Jun 20 19:16:58.309634 systemd[1]: Started cri-containerd-65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0.scope - libcontainer container 65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0. 
Jun 20 19:16:58.351553 containerd[1730]: time="2025-06-20T19:16:58.351419026Z" level=info msg="StartContainer for \"65b1d70ae42c96e0dcd7367165199436497da2fe3d34a4693fabe53b2e90d0c0\" returns successfully" Jun 20 19:16:58.442288 kubelet[3138]: I0620 19:16:58.442229 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-94qdh" podStartSLOduration=38.442210946 podStartE2EDuration="38.442210946s" podCreationTimestamp="2025-06-20 19:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:58.441741057 +0000 UTC m=+43.278047619" watchObservedRunningTime="2025-06-20 19:16:58.442210946 +0000 UTC m=+43.278517507" Jun 20 19:16:58.499023 kubelet[3138]: I0620 19:16:58.498962 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n28mf" podStartSLOduration=38.498940403 podStartE2EDuration="38.498940403s" podCreationTimestamp="2025-06-20 19:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:58.464103397 +0000 UTC m=+43.300409961" watchObservedRunningTime="2025-06-20 19:16:58.498940403 +0000 UTC m=+43.335246967" Jun 20 19:17:53.639617 systemd[1]: Started sshd@7-10.200.4.21:22-10.200.16.10:33698.service - OpenSSH per-connection server daemon (10.200.16.10:33698). Jun 20 19:17:54.237470 sshd[4446]: Accepted publickey for core from 10.200.16.10 port 33698 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:54.238757 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:54.242602 systemd-logind[1706]: New session 10 of user core. Jun 20 19:17:54.247626 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 20 19:17:54.731927 sshd[4448]: Connection closed by 10.200.16.10 port 33698 Jun 20 19:17:54.732557 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:54.735771 systemd[1]: sshd@7-10.200.4.21:22-10.200.16.10:33698.service: Deactivated successfully. Jun 20 19:17:54.738327 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:17:54.739893 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:17:54.742049 systemd-logind[1706]: Removed session 10. Jun 20 19:17:59.840648 systemd[1]: Started sshd@8-10.200.4.21:22-10.200.16.10:44562.service - OpenSSH per-connection server daemon (10.200.16.10:44562). Jun 20 19:18:00.435666 sshd[4462]: Accepted publickey for core from 10.200.16.10 port 44562 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:00.436868 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:00.441411 systemd-logind[1706]: New session 11 of user core. Jun 20 19:18:00.445610 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:18:00.902254 sshd[4464]: Connection closed by 10.200.16.10 port 44562 Jun 20 19:18:00.904375 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:00.907515 systemd[1]: sshd@8-10.200.4.21:22-10.200.16.10:44562.service: Deactivated successfully. Jun 20 19:18:00.909209 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:18:00.910380 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:18:00.911460 systemd-logind[1706]: Removed session 11. Jun 20 19:18:06.009671 systemd[1]: Started sshd@9-10.200.4.21:22-10.200.16.10:44574.service - OpenSSH per-connection server daemon (10.200.16.10:44574). 
Jun 20 19:18:06.605563 sshd[4476]: Accepted publickey for core from 10.200.16.10 port 44574 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:06.606745 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:06.611091 systemd-logind[1706]: New session 12 of user core. Jun 20 19:18:06.615615 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:18:07.072246 sshd[4478]: Connection closed by 10.200.16.10 port 44574 Jun 20 19:18:07.072889 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:07.075565 systemd[1]: sshd@9-10.200.4.21:22-10.200.16.10:44574.service: Deactivated successfully. Jun 20 19:18:07.079191 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:18:07.082722 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:18:07.083787 systemd-logind[1706]: Removed session 12. Jun 20 19:18:12.184914 systemd[1]: Started sshd@10-10.200.4.21:22-10.200.16.10:47042.service - OpenSSH per-connection server daemon (10.200.16.10:47042). Jun 20 19:18:12.779873 sshd[4492]: Accepted publickey for core from 10.200.16.10 port 47042 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:12.781087 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:12.785576 systemd-logind[1706]: New session 13 of user core. Jun 20 19:18:12.789616 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:18:13.254296 sshd[4494]: Connection closed by 10.200.16.10 port 47042 Jun 20 19:18:13.254927 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:13.257825 systemd[1]: sshd@10-10.200.4.21:22-10.200.16.10:47042.service: Deactivated successfully. Jun 20 19:18:13.259630 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:18:13.261279 systemd-logind[1706]: Session 13 logged out. 
Waiting for processes to exit. Jun 20 19:18:13.262982 systemd-logind[1706]: Removed session 13. Jun 20 19:18:13.358788 systemd[1]: Started sshd@11-10.200.4.21:22-10.200.16.10:47046.service - OpenSSH per-connection server daemon (10.200.16.10:47046). Jun 20 19:18:13.949685 sshd[4507]: Accepted publickey for core from 10.200.16.10 port 47046 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:13.950876 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:13.955767 systemd-logind[1706]: New session 14 of user core. Jun 20 19:18:13.965589 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:18:14.443725 sshd[4509]: Connection closed by 10.200.16.10 port 47046 Jun 20 19:18:14.444351 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:14.446963 systemd[1]: sshd@11-10.200.4.21:22-10.200.16.10:47046.service: Deactivated successfully. Jun 20 19:18:14.448947 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:18:14.450592 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:18:14.451904 systemd-logind[1706]: Removed session 14. Jun 20 19:18:14.547762 systemd[1]: Started sshd@12-10.200.4.21:22-10.200.16.10:47054.service - OpenSSH per-connection server daemon (10.200.16.10:47054). Jun 20 19:18:15.141875 sshd[4519]: Accepted publickey for core from 10.200.16.10 port 47054 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:15.143082 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:15.147731 systemd-logind[1706]: New session 15 of user core. Jun 20 19:18:15.151607 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 19:18:15.614962 sshd[4521]: Connection closed by 10.200.16.10 port 47054 Jun 20 19:18:15.615639 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:15.618445 systemd[1]: sshd@12-10.200.4.21:22-10.200.16.10:47054.service: Deactivated successfully. Jun 20 19:18:15.620618 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:18:15.622734 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:18:15.623596 systemd-logind[1706]: Removed session 15. Jun 20 19:18:20.723727 systemd[1]: Started sshd@13-10.200.4.21:22-10.200.16.10:42868.service - OpenSSH per-connection server daemon (10.200.16.10:42868). Jun 20 19:18:21.317359 sshd[4535]: Accepted publickey for core from 10.200.16.10 port 42868 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:21.318613 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:21.322519 systemd-logind[1706]: New session 16 of user core. Jun 20 19:18:21.332634 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:18:21.786977 sshd[4539]: Connection closed by 10.200.16.10 port 42868 Jun 20 19:18:21.787633 sshd-session[4535]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:21.791014 systemd[1]: sshd@13-10.200.4.21:22-10.200.16.10:42868.service: Deactivated successfully. Jun 20 19:18:21.792910 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:18:21.793699 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:18:21.794944 systemd-logind[1706]: Removed session 16. Jun 20 19:18:21.895545 systemd[1]: Started sshd@14-10.200.4.21:22-10.200.16.10:42882.service - OpenSSH per-connection server daemon (10.200.16.10:42882). 
Jun 20 19:18:22.484667 sshd[4551]: Accepted publickey for core from 10.200.16.10 port 42882 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:22.485829 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:22.490536 systemd-logind[1706]: New session 17 of user core. Jun 20 19:18:22.499603 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:18:23.080971 sshd[4553]: Connection closed by 10.200.16.10 port 42882 Jun 20 19:18:23.081612 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:23.085466 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:18:23.087654 systemd[1]: sshd@14-10.200.4.21:22-10.200.16.10:42882.service: Deactivated successfully. Jun 20 19:18:23.089564 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:18:23.091537 systemd-logind[1706]: Removed session 17. Jun 20 19:18:23.188209 systemd[1]: Started sshd@15-10.200.4.21:22-10.200.16.10:42898.service - OpenSSH per-connection server daemon (10.200.16.10:42898). Jun 20 19:18:23.791291 sshd[4563]: Accepted publickey for core from 10.200.16.10 port 42898 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:23.792416 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:23.796841 systemd-logind[1706]: New session 18 of user core. Jun 20 19:18:23.800625 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:18:25.478238 sshd[4565]: Connection closed by 10.200.16.10 port 42898 Jun 20 19:18:25.478874 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:25.482318 systemd[1]: sshd@15-10.200.4.21:22-10.200.16.10:42898.service: Deactivated successfully. Jun 20 19:18:25.484094 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:18:25.484917 systemd-logind[1706]: Session 18 logged out. 
Waiting for processes to exit. Jun 20 19:18:25.486513 systemd-logind[1706]: Removed session 18. Jun 20 19:18:25.587711 systemd[1]: Started sshd@16-10.200.4.21:22-10.200.16.10:42906.service - OpenSSH per-connection server daemon (10.200.16.10:42906). Jun 20 19:18:26.184890 sshd[4582]: Accepted publickey for core from 10.200.16.10 port 42906 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:26.186246 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:26.190771 systemd-logind[1706]: New session 19 of user core. Jun 20 19:18:26.196609 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:18:26.744869 sshd[4584]: Connection closed by 10.200.16.10 port 42906 Jun 20 19:18:26.745868 sshd-session[4582]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:26.749200 systemd[1]: sshd@16-10.200.4.21:22-10.200.16.10:42906.service: Deactivated successfully. Jun 20 19:18:26.751004 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:18:26.751786 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:18:26.753461 systemd-logind[1706]: Removed session 19. Jun 20 19:18:26.854993 systemd[1]: Started sshd@17-10.200.4.21:22-10.200.16.10:42922.service - OpenSSH per-connection server daemon (10.200.16.10:42922). Jun 20 19:18:27.452946 sshd[4594]: Accepted publickey for core from 10.200.16.10 port 42922 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:27.454252 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:27.459627 systemd-logind[1706]: New session 20 of user core. Jun 20 19:18:27.463626 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 20 19:18:27.920097 sshd[4596]: Connection closed by 10.200.16.10 port 42922 Jun 20 19:18:27.920652 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:27.923643 systemd[1]: sshd@17-10.200.4.21:22-10.200.16.10:42922.service: Deactivated successfully. Jun 20 19:18:27.925653 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:18:27.927091 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:18:27.928701 systemd-logind[1706]: Removed session 20. Jun 20 19:18:33.025604 systemd[1]: Started sshd@18-10.200.4.21:22-10.200.16.10:37408.service - OpenSSH per-connection server daemon (10.200.16.10:37408). Jun 20 19:18:33.614144 sshd[4612]: Accepted publickey for core from 10.200.16.10 port 37408 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:33.615325 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:33.619789 systemd-logind[1706]: New session 21 of user core. Jun 20 19:18:33.623589 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:18:34.074659 sshd[4614]: Connection closed by 10.200.16.10 port 37408 Jun 20 19:18:34.075269 sshd-session[4612]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:34.077972 systemd[1]: sshd@18-10.200.4.21:22-10.200.16.10:37408.service: Deactivated successfully. Jun 20 19:18:34.080015 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:18:34.082466 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:18:34.083739 systemd-logind[1706]: Removed session 21. Jun 20 19:18:39.182887 systemd[1]: Started sshd@19-10.200.4.21:22-10.200.16.10:50788.service - OpenSSH per-connection server daemon (10.200.16.10:50788). 
Jun 20 19:18:39.771781 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 50788 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:39.772960 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:39.777412 systemd-logind[1706]: New session 22 of user core. Jun 20 19:18:39.788616 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:18:40.242109 sshd[4628]: Connection closed by 10.200.16.10 port 50788 Jun 20 19:18:40.242830 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:40.246251 systemd[1]: sshd@19-10.200.4.21:22-10.200.16.10:50788.service: Deactivated successfully. Jun 20 19:18:40.248634 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:18:40.249364 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:18:40.250754 systemd-logind[1706]: Removed session 22. Jun 20 19:18:45.351829 systemd[1]: Started sshd@20-10.200.4.21:22-10.200.16.10:50794.service - OpenSSH per-connection server daemon (10.200.16.10:50794). Jun 20 19:18:45.952066 sshd[4640]: Accepted publickey for core from 10.200.16.10 port 50794 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:45.953251 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:45.957826 systemd-logind[1706]: New session 23 of user core. Jun 20 19:18:45.965599 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:18:46.421751 sshd[4642]: Connection closed by 10.200.16.10 port 50794 Jun 20 19:18:46.422365 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:46.425749 systemd[1]: sshd@20-10.200.4.21:22-10.200.16.10:50794.service: Deactivated successfully. Jun 20 19:18:46.427680 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:18:46.428726 systemd-logind[1706]: Session 23 logged out. 
Waiting for processes to exit. Jun 20 19:18:46.430097 systemd-logind[1706]: Removed session 23. Jun 20 19:18:46.531829 systemd[1]: Started sshd@21-10.200.4.21:22-10.200.16.10:50798.service - OpenSSH per-connection server daemon (10.200.16.10:50798). Jun 20 19:18:47.120790 sshd[4654]: Accepted publickey for core from 10.200.16.10 port 50798 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:47.122034 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:47.126559 systemd-logind[1706]: New session 24 of user core. Jun 20 19:18:47.133603 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:18:48.732392 containerd[1730]: time="2025-06-20T19:18:48.731543257Z" level=info msg="StopContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" with timeout 30 (s)" Jun 20 19:18:48.733571 containerd[1730]: time="2025-06-20T19:18:48.733544346Z" level=info msg="Stop container \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" with signal terminated" Jun 20 19:18:48.744321 systemd[1]: cri-containerd-bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041.scope: Deactivated successfully. 
Jun 20 19:18:48.746615 containerd[1730]: time="2025-06-20T19:18:48.746584017Z" level=info msg="received exit event container_id:\"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" id:\"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" pid:3659 exited_at:{seconds:1750447128 nanos:746311901}" Jun 20 19:18:48.747328 containerd[1730]: time="2025-06-20T19:18:48.747307653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" id:\"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" pid:3659 exited_at:{seconds:1750447128 nanos:746311901}" Jun 20 19:18:48.752619 containerd[1730]: time="2025-06-20T19:18:48.752593220Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:18:48.758049 containerd[1730]: time="2025-06-20T19:18:48.757982390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" id:\"5e2796074297120b08fb238e85e070fa69b24e196ab8e6660e6a40a062d95c1a\" pid:4681 exited_at:{seconds:1750447128 nanos:757704080}" Jun 20 19:18:48.760173 containerd[1730]: time="2025-06-20T19:18:48.760154429Z" level=info msg="StopContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" with timeout 2 (s)" Jun 20 19:18:48.760916 containerd[1730]: time="2025-06-20T19:18:48.760885035Z" level=info msg="Stop container \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" with signal terminated" Jun 20 19:18:48.768277 systemd-networkd[1368]: lxc_health: Link DOWN Jun 20 19:18:48.768285 systemd-networkd[1368]: lxc_health: Lost carrier Jun 20 19:18:48.780449 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041-rootfs.mount: Deactivated successfully. Jun 20 19:18:48.786866 systemd[1]: cri-containerd-8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3.scope: Deactivated successfully. Jun 20 19:18:48.788448 containerd[1730]: time="2025-06-20T19:18:48.788192884Z" level=info msg="received exit event container_id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" pid:3771 exited_at:{seconds:1750447128 nanos:787866541}" Jun 20 19:18:48.787410 systemd[1]: cri-containerd-8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3.scope: Consumed 5.458s CPU time, 124.1M memory peak, 144K read from disk, 13.3M written to disk. Jun 20 19:18:48.788742 containerd[1730]: time="2025-06-20T19:18:48.788198697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" id:\"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" pid:3771 exited_at:{seconds:1750447128 nanos:787866541}" Jun 20 19:18:48.804089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3-rootfs.mount: Deactivated successfully. 
Jun 20 19:18:48.838908 containerd[1730]: time="2025-06-20T19:18:48.838786404Z" level=info msg="StopContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" returns successfully" Jun 20 19:18:48.839202 containerd[1730]: time="2025-06-20T19:18:48.839183986Z" level=info msg="StopContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" returns successfully" Jun 20 19:18:48.839958 containerd[1730]: time="2025-06-20T19:18:48.839934834Z" level=info msg="StopPodSandbox for \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\"" Jun 20 19:18:48.840043 containerd[1730]: time="2025-06-20T19:18:48.840022151Z" level=info msg="StopPodSandbox for \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\"" Jun 20 19:18:48.840072 containerd[1730]: time="2025-06-20T19:18:48.840054154Z" level=info msg="Container to stop \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.840072 containerd[1730]: time="2025-06-20T19:18:48.840065553Z" level=info msg="Container to stop \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.840128 containerd[1730]: time="2025-06-20T19:18:48.840074762Z" level=info msg="Container to stop \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.840128 containerd[1730]: time="2025-06-20T19:18:48.840083447Z" level=info msg="Container to stop \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.840128 containerd[1730]: time="2025-06-20T19:18:48.840091818Z" level=info msg="Container to stop \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.840392 containerd[1730]: time="2025-06-20T19:18:48.840227118Z" level=info msg="Container to stop \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:48.847599 systemd[1]: cri-containerd-7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd.scope: Deactivated successfully. Jun 20 19:18:48.849700 systemd[1]: cri-containerd-b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68.scope: Deactivated successfully. Jun 20 19:18:48.850635 containerd[1730]: time="2025-06-20T19:18:48.850213801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" id:\"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" pid:3292 exit_status:137 exited_at:{seconds:1750447128 nanos:849148825}" Jun 20 19:18:48.876887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68-rootfs.mount: Deactivated successfully. Jun 20 19:18:48.883803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd-rootfs.mount: Deactivated successfully. 
Jun 20 19:18:48.885399 containerd[1730]: time="2025-06-20T19:18:48.885350075Z" level=info msg="shim disconnected" id=b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68 namespace=k8s.io Jun 20 19:18:48.885399 containerd[1730]: time="2025-06-20T19:18:48.885385094Z" level=warning msg="cleaning up after shim disconnected" id=b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68 namespace=k8s.io Jun 20 19:18:48.886746 containerd[1730]: time="2025-06-20T19:18:48.885394530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:18:48.890048 containerd[1730]: time="2025-06-20T19:18:48.888230062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" id:\"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" pid:3370 exit_status:137 exited_at:{seconds:1750447128 nanos:852300496}" Jun 20 19:18:48.890048 containerd[1730]: time="2025-06-20T19:18:48.889624162Z" level=info msg="received exit event sandbox_id:\"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" exit_status:137 exited_at:{seconds:1750447128 nanos:852300496}" Jun 20 19:18:48.890048 containerd[1730]: time="2025-06-20T19:18:48.889994347Z" level=info msg="received exit event sandbox_id:\"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" exit_status:137 exited_at:{seconds:1750447128 nanos:849148825}" Jun 20 19:18:48.890493 containerd[1730]: time="2025-06-20T19:18:48.890422647Z" level=info msg="TearDown network for sandbox \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" successfully" Jun 20 19:18:48.890493 containerd[1730]: time="2025-06-20T19:18:48.890458687Z" level=info msg="StopPodSandbox for \"7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd\" returns successfully" Jun 20 19:18:48.890719 containerd[1730]: time="2025-06-20T19:18:48.890707151Z" level=info msg="shim disconnected" 
id=7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd namespace=k8s.io Jun 20 19:18:48.890795 containerd[1730]: time="2025-06-20T19:18:48.890784390Z" level=warning msg="cleaning up after shim disconnected" id=7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd namespace=k8s.io Jun 20 19:18:48.890926 containerd[1730]: time="2025-06-20T19:18:48.890826208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:18:48.891495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7363034ec83eccee4886789c1b3e80fd7de5b5be0be1bec732f3a0ef930385fd-shm.mount: Deactivated successfully. Jun 20 19:18:48.891692 containerd[1730]: time="2025-06-20T19:18:48.891668436Z" level=info msg="TearDown network for sandbox \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" successfully" Jun 20 19:18:48.891874 containerd[1730]: time="2025-06-20T19:18:48.891862713Z" level=info msg="StopPodSandbox for \"b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68\" returns successfully" Jun 20 19:18:49.084103 kubelet[3138]: I0620 19:18:49.083981 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-net\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084618 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-run\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084649 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cni-path\") pod 
\"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084673 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-config-path\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084691 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-etc-cni-netd\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084711 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hubble-tls\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.085613 kubelet[3138]: I0620 19:18:49.084727 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-bpf-maps\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084747 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-lib-modules\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084763 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-kernel\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084781 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9x7w\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-kube-api-access-q9x7w\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084799 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv9w\" (UniqueName: \"kubernetes.io/projected/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-kube-api-access-6kv9w\") pod \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\" (UID: \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084818 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-clustermesh-secrets\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086122 kubelet[3138]: I0620 19:18:49.084834 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hostproc\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086495 kubelet[3138]: I0620 19:18:49.084849 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-cgroup\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086495 
kubelet[3138]: I0620 19:18:49.084867 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-xtables-lock\") pod \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\" (UID: \"029e13ff-af0a-43fc-a0d3-2fb127fc90fa\") " Jun 20 19:18:49.086495 kubelet[3138]: I0620 19:18:49.084886 3138 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-cilium-config-path\") pod \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\" (UID: \"3af6c32a-b96f-4c57-b297-8fcd646d4e3f\") " Jun 20 19:18:49.086495 kubelet[3138]: I0620 19:18:49.084131 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.086495 kubelet[3138]: I0620 19:18:49.085527 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.086982 kubelet[3138]: I0620 19:18:49.085561 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.086982 kubelet[3138]: I0620 19:18:49.085574 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.086982 kubelet[3138]: I0620 19:18:49.086584 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.087194 kubelet[3138]: I0620 19:18:49.087173 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.087509 kubelet[3138]: I0620 19:18:49.087488 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.088065 kubelet[3138]: I0620 19:18:49.088034 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.089997 kubelet[3138]: I0620 19:18:49.089970 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.092606 kubelet[3138]: I0620 19:18:49.092581 3138 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-net\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092615 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-run\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092627 3138 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cni-path\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092636 3138 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-etc-cni-netd\") on 
node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092644 3138 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-bpf-maps\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092655 3138 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-lib-modules\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092664 3138 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-host-proc-sys-kernel\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092690 kubelet[3138]: I0620 19:18:49.092689 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-cgroup\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092940 kubelet[3138]: I0620 19:18:49.092700 3138 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-xtables-lock\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.092940 kubelet[3138]: I0620 19:18:49.092585 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 20 19:18:49.092940 kubelet[3138]: I0620 19:18:49.092786 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-kube-api-access-6kv9w" (OuterVolumeSpecName: "kube-api-access-6kv9w") pod "3af6c32a-b96f-4c57-b297-8fcd646d4e3f" (UID: "3af6c32a-b96f-4c57-b297-8fcd646d4e3f"). InnerVolumeSpecName "kube-api-access-6kv9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:18:49.092940 kubelet[3138]: I0620 19:18:49.092846 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:18:49.093033 kubelet[3138]: I0620 19:18:49.093016 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-kube-api-access-q9x7w" (OuterVolumeSpecName: "kube-api-access-q9x7w") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "kube-api-access-q9x7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:18:49.093357 kubelet[3138]: I0620 19:18:49.093345 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:18:49.094360 kubelet[3138]: I0620 19:18:49.094323 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3af6c32a-b96f-4c57-b297-8fcd646d4e3f" (UID: "3af6c32a-b96f-4c57-b297-8fcd646d4e3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:18:49.094700 kubelet[3138]: I0620 19:18:49.094680 3138 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "029e13ff-af0a-43fc-a0d3-2fb127fc90fa" (UID: "029e13ff-af0a-43fc-a0d3-2fb127fc90fa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:18:49.193061 kubelet[3138]: I0620 19:18:49.193022 3138 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv9w\" (UniqueName: \"kubernetes.io/projected/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-kube-api-access-6kv9w\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193061 kubelet[3138]: I0620 19:18:49.193055 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3af6c32a-b96f-4c57-b297-8fcd646d4e3f-cilium-config-path\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193061 kubelet[3138]: I0620 19:18:49.193066 3138 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-clustermesh-secrets\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193061 kubelet[3138]: I0620 19:18:49.193076 3138 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hostproc\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193061 kubelet[3138]: I0620 19:18:49.193084 3138 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-cilium-config-path\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193307 kubelet[3138]: I0620 19:18:49.193118 3138 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-hubble-tls\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.193307 kubelet[3138]: I0620 19:18:49.193130 3138 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9x7w\" (UniqueName: \"kubernetes.io/projected/029e13ff-af0a-43fc-a0d3-2fb127fc90fa-kube-api-access-q9x7w\") on node \"ci-4344.1.0-a-a13bac16ef\" DevicePath \"\"" Jun 20 19:18:49.286591 systemd[1]: Removed slice kubepods-besteffort-pod3af6c32a_b96f_4c57_b297_8fcd646d4e3f.slice - libcontainer container kubepods-besteffort-pod3af6c32a_b96f_4c57_b297_8fcd646d4e3f.slice. Jun 20 19:18:49.288921 systemd[1]: Removed slice kubepods-burstable-pod029e13ff_af0a_43fc_a0d3_2fb127fc90fa.slice - libcontainer container kubepods-burstable-pod029e13ff_af0a_43fc_a0d3_2fb127fc90fa.slice. Jun 20 19:18:49.289011 systemd[1]: kubepods-burstable-pod029e13ff_af0a_43fc_a0d3_2fb127fc90fa.slice: Consumed 5.535s CPU time, 124.6M memory peak, 144K read from disk, 13.3M written to disk. 
Jun 20 19:18:49.628927 kubelet[3138]: I0620 19:18:49.628612 3138 scope.go:117] "RemoveContainer" containerID="bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041" Jun 20 19:18:49.631253 containerd[1730]: time="2025-06-20T19:18:49.631121497Z" level=info msg="RemoveContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\"" Jun 20 19:18:49.644128 containerd[1730]: time="2025-06-20T19:18:49.644073136Z" level=info msg="RemoveContainer for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" returns successfully" Jun 20 19:18:49.644627 kubelet[3138]: I0620 19:18:49.644333 3138 scope.go:117] "RemoveContainer" containerID="bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041" Jun 20 19:18:49.645105 containerd[1730]: time="2025-06-20T19:18:49.644902581Z" level=error msg="ContainerStatus for \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\": not found" Jun 20 19:18:49.645523 kubelet[3138]: E0620 19:18:49.645465 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\": not found" containerID="bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041" Jun 20 19:18:49.645618 kubelet[3138]: I0620 19:18:49.645498 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041"} err="failed to get container status \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcc97cd917d8c09612b2d6c0415dc4e6dee7f9206c6cf6af4cfd7bd82e24c041\": not found" Jun 20 19:18:49.645652 
kubelet[3138]: I0620 19:18:49.645620 3138 scope.go:117] "RemoveContainer" containerID="8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3" Jun 20 19:18:49.648363 containerd[1730]: time="2025-06-20T19:18:49.648252119Z" level=info msg="RemoveContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\"" Jun 20 19:18:49.656051 containerd[1730]: time="2025-06-20T19:18:49.656020303Z" level=info msg="RemoveContainer for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" returns successfully" Jun 20 19:18:49.656257 kubelet[3138]: I0620 19:18:49.656231 3138 scope.go:117] "RemoveContainer" containerID="2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c" Jun 20 19:18:49.657481 containerd[1730]: time="2025-06-20T19:18:49.657453515Z" level=info msg="RemoveContainer for \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\"" Jun 20 19:18:49.665207 containerd[1730]: time="2025-06-20T19:18:49.665168781Z" level=info msg="RemoveContainer for \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" returns successfully" Jun 20 19:18:49.665359 kubelet[3138]: I0620 19:18:49.665340 3138 scope.go:117] "RemoveContainer" containerID="b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a" Jun 20 19:18:49.668787 containerd[1730]: time="2025-06-20T19:18:49.668754558Z" level=info msg="RemoveContainer for \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\"" Jun 20 19:18:49.676002 containerd[1730]: time="2025-06-20T19:18:49.675959391Z" level=info msg="RemoveContainer for \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" returns successfully" Jun 20 19:18:49.676159 kubelet[3138]: I0620 19:18:49.676146 3138 scope.go:117] "RemoveContainer" containerID="88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0" Jun 20 19:18:49.677528 containerd[1730]: time="2025-06-20T19:18:49.677502325Z" level=info msg="RemoveContainer for 
\"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\"" Jun 20 19:18:49.683873 containerd[1730]: time="2025-06-20T19:18:49.683833733Z" level=info msg="RemoveContainer for \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" returns successfully" Jun 20 19:18:49.684062 kubelet[3138]: I0620 19:18:49.684024 3138 scope.go:117] "RemoveContainer" containerID="5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" Jun 20 19:18:49.685349 containerd[1730]: time="2025-06-20T19:18:49.685313983Z" level=info msg="RemoveContainer for \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\"" Jun 20 19:18:49.691775 containerd[1730]: time="2025-06-20T19:18:49.691738727Z" level=info msg="RemoveContainer for \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" returns successfully" Jun 20 19:18:49.691966 kubelet[3138]: I0620 19:18:49.691951 3138 scope.go:117] "RemoveContainer" containerID="8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3" Jun 20 19:18:49.692158 containerd[1730]: time="2025-06-20T19:18:49.692125946Z" level=error msg="ContainerStatus for \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\": not found" Jun 20 19:18:49.692294 kubelet[3138]: E0620 19:18:49.692267 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\": not found" containerID="8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3" Jun 20 19:18:49.692329 kubelet[3138]: I0620 19:18:49.692301 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3"} err="failed to get 
container status \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ff794544d1dfe2f9e44941bc629f2916fd76d0517f5bbf9169926ac903649f3\": not found" Jun 20 19:18:49.692329 kubelet[3138]: I0620 19:18:49.692325 3138 scope.go:117] "RemoveContainer" containerID="2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c" Jun 20 19:18:49.692521 containerd[1730]: time="2025-06-20T19:18:49.692492507Z" level=error msg="ContainerStatus for \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\": not found" Jun 20 19:18:49.692605 kubelet[3138]: E0620 19:18:49.692585 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\": not found" containerID="2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c" Jun 20 19:18:49.692637 kubelet[3138]: I0620 19:18:49.692604 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c"} err="failed to get container status \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d2e6af9947e710afad74b1c0d2309cbe84b2ce8ce78de65290fe84f1872067c\": not found" Jun 20 19:18:49.692637 kubelet[3138]: I0620 19:18:49.692621 3138 scope.go:117] "RemoveContainer" containerID="b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a" Jun 20 19:18:49.692763 containerd[1730]: time="2025-06-20T19:18:49.692741891Z" level=error msg="ContainerStatus for 
\"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\": not found" Jun 20 19:18:49.692854 kubelet[3138]: E0620 19:18:49.692835 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\": not found" containerID="b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a" Jun 20 19:18:49.692895 kubelet[3138]: I0620 19:18:49.692853 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a"} err="failed to get container status \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b39807cc83052db84ed70102dcc118b7c5c30853bd2f86cd8a9b417f1b64fd7a\": not found" Jun 20 19:18:49.692895 kubelet[3138]: I0620 19:18:49.692870 3138 scope.go:117] "RemoveContainer" containerID="88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0" Jun 20 19:18:49.693031 containerd[1730]: time="2025-06-20T19:18:49.692992888Z" level=error msg="ContainerStatus for \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\": not found" Jun 20 19:18:49.693123 kubelet[3138]: E0620 19:18:49.693105 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\": not found" 
containerID="88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0" Jun 20 19:18:49.693152 kubelet[3138]: I0620 19:18:49.693123 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0"} err="failed to get container status \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"88808e95988e852f8d097cf55f6436634cc53e9cb6de50d4b9491b5d698ed1d0\": not found" Jun 20 19:18:49.693152 kubelet[3138]: I0620 19:18:49.693138 3138 scope.go:117] "RemoveContainer" containerID="5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" Jun 20 19:18:49.693304 containerd[1730]: time="2025-06-20T19:18:49.693274174Z" level=error msg="ContainerStatus for \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\": not found" Jun 20 19:18:49.693379 kubelet[3138]: E0620 19:18:49.693361 3138 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\": not found" containerID="5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9" Jun 20 19:18:49.693406 kubelet[3138]: I0620 19:18:49.693377 3138 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9"} err="failed to get container status \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f42c2c7e585a9314e74c008c5e3f649e93a00256ae071c8cc6b8ef9f6134da9\": not found" Jun 20 
19:18:49.779217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b02cf2ae1990a0187f0f3d6b0aa64cd7283ab3fc81c67af67fb23b4462fb1b68-shm.mount: Deactivated successfully. Jun 20 19:18:49.779323 systemd[1]: var-lib-kubelet-pods-3af6c32a\x2db96f\x2d4c57\x2db297\x2d8fcd646d4e3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6kv9w.mount: Deactivated successfully. Jun 20 19:18:49.779387 systemd[1]: var-lib-kubelet-pods-029e13ff\x2daf0a\x2d43fc\x2da0d3\x2d2fb127fc90fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9x7w.mount: Deactivated successfully. Jun 20 19:18:49.779557 systemd[1]: var-lib-kubelet-pods-029e13ff\x2daf0a\x2d43fc\x2da0d3\x2d2fb127fc90fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:18:49.779631 systemd[1]: var-lib-kubelet-pods-029e13ff\x2daf0a\x2d43fc\x2da0d3\x2d2fb127fc90fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:18:50.384592 kubelet[3138]: E0620 19:18:50.384552 3138 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:18:50.768656 sshd[4656]: Connection closed by 10.200.16.10 port 50798 Jun 20 19:18:50.769269 sshd-session[4654]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:50.772184 systemd[1]: sshd@21-10.200.4.21:22-10.200.16.10:50798.service: Deactivated successfully. Jun 20 19:18:50.774004 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:18:50.775758 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:18:50.776588 systemd-logind[1706]: Removed session 24. Jun 20 19:18:50.873754 systemd[1]: Started sshd@22-10.200.4.21:22-10.200.16.10:34694.service - OpenSSH per-connection server daemon (10.200.16.10:34694). 
Jun 20 19:18:51.281782 kubelet[3138]: I0620 19:18:51.281666 3138 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" path="/var/lib/kubelet/pods/029e13ff-af0a-43fc-a0d3-2fb127fc90fa/volumes" Jun 20 19:18:51.282152 kubelet[3138]: I0620 19:18:51.282137 3138 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af6c32a-b96f-4c57-b297-8fcd646d4e3f" path="/var/lib/kubelet/pods/3af6c32a-b96f-4c57-b297-8fcd646d4e3f/volumes" Jun 20 19:18:51.467448 sshd[4804]: Accepted publickey for core from 10.200.16.10 port 34694 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:51.468701 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:51.473282 systemd-logind[1706]: New session 25 of user core. Jun 20 19:18:51.476598 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:18:52.275799 kubelet[3138]: E0620 19:18:52.275758 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="clean-cilium-state" Jun 20 19:18:52.275799 kubelet[3138]: E0620 19:18:52.275798 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="apply-sysctl-overwrites" Jun 20 19:18:52.275799 kubelet[3138]: E0620 19:18:52.275806 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="cilium-agent" Jun 20 19:18:52.275799 kubelet[3138]: E0620 19:18:52.275814 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="mount-cgroup" Jun 20 19:18:52.275799 kubelet[3138]: E0620 19:18:52.275820 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3af6c32a-b96f-4c57-b297-8fcd646d4e3f" containerName="cilium-operator" Jun 20 19:18:52.276343 kubelet[3138]: E0620 
19:18:52.275827 3138 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="mount-bpf-fs"
Jun 20 19:18:52.276343 kubelet[3138]: I0620 19:18:52.275854 3138 memory_manager.go:354] "RemoveStaleState removing state" podUID="029e13ff-af0a-43fc-a0d3-2fb127fc90fa" containerName="cilium-agent"
Jun 20 19:18:52.276343 kubelet[3138]: I0620 19:18:52.275862 3138 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af6c32a-b96f-4c57-b297-8fcd646d4e3f" containerName="cilium-operator"
Jun 20 19:18:52.281455 kubelet[3138]: W0620 19:18:52.280460 3138 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4344.1.0-a-a13bac16ef" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object
Jun 20 19:18:52.281455 kubelet[3138]: E0620 19:18:52.280507 3138 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4344.1.0-a-a13bac16ef\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object" logger="UnhandledError"
Jun 20 19:18:52.288111 systemd[1]: Created slice kubepods-burstable-podbbae80cc_311f_4427_bbfe_ddb31f796385.slice - libcontainer container kubepods-burstable-podbbae80cc_311f_4427_bbfe_ddb31f796385.slice.
Jun 20 19:18:52.292099 kubelet[3138]: W0620 19:18:52.291890 3138 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4344.1.0-a-a13bac16ef" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object
Jun 20 19:18:52.292099 kubelet[3138]: E0620 19:18:52.291931 3138 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4344.1.0-a-a13bac16ef\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object" logger="UnhandledError"
Jun 20 19:18:52.293332 kubelet[3138]: W0620 19:18:52.293278 3138 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4344.1.0-a-a13bac16ef" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object
Jun 20 19:18:52.293332 kubelet[3138]: E0620 19:18:52.293309 3138 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4344.1.0-a-a13bac16ef\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.0-a-a13bac16ef' and this object" logger="UnhandledError"
Jun 20 19:18:52.312580 kubelet[3138]: I0620 19:18:52.312552 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-bpf-maps\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312584 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-cni-path\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312606 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-host-proc-sys-kernel\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312621 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-lib-modules\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312637 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-clustermesh-secrets\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312654 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-cgroup\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312764 kubelet[3138]: I0620 19:18:52.312670 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-etc-cni-netd\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312687 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwpc9\" (UniqueName: \"kubernetes.io/projected/bbae80cc-311f-4427-bbfe-ddb31f796385-kube-api-access-jwpc9\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312703 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-run\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312717 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-hostproc\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312732 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-xtables-lock\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312748 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-config-path\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.312930 kubelet[3138]: I0620 19:18:52.312761 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-ipsec-secrets\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.313029 kubelet[3138]: I0620 19:18:52.312777 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbae80cc-311f-4427-bbfe-ddb31f796385-hubble-tls\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.313029 kubelet[3138]: I0620 19:18:52.312793 3138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbae80cc-311f-4427-bbfe-ddb31f796385-host-proc-sys-net\") pod \"cilium-tdmdf\" (UID: \"bbae80cc-311f-4427-bbfe-ddb31f796385\") " pod="kube-system/cilium-tdmdf"
Jun 20 19:18:52.348820 sshd[4808]: Connection closed by 10.200.16.10 port 34694
Jun 20 19:18:52.349379 sshd-session[4804]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:52.352155 systemd[1]: sshd@22-10.200.4.21:22-10.200.16.10:34694.service: Deactivated successfully.
Jun 20 19:18:52.354007 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:18:52.354876 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:18:52.356385 systemd-logind[1706]: Removed session 25.
Jun 20 19:18:52.461765 systemd[1]: Started sshd@23-10.200.4.21:22-10.200.16.10:34698.service - OpenSSH per-connection server daemon (10.200.16.10:34698).
Jun 20 19:18:53.053370 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 34698 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:53.054598 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:53.059110 systemd-logind[1706]: New session 26 of user core.
Jun 20 19:18:53.065590 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:18:53.414428 kubelet[3138]: E0620 19:18:53.414304 3138 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jun 20 19:18:53.414428 kubelet[3138]: E0620 19:18:53.414421 3138 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-clustermesh-secrets podName:bbae80cc-311f-4427-bbfe-ddb31f796385 nodeName:}" failed. No retries permitted until 2025-06-20 19:18:53.914396302 +0000 UTC m=+158.750702858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-clustermesh-secrets") pod "cilium-tdmdf" (UID: "bbae80cc-311f-4427-bbfe-ddb31f796385") : failed to sync secret cache: timed out waiting for the condition
Jun 20 19:18:53.414866 kubelet[3138]: E0620 19:18:53.414303 3138 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jun 20 19:18:53.414866 kubelet[3138]: E0620 19:18:53.414725 3138 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-ipsec-secrets podName:bbae80cc-311f-4427-bbfe-ddb31f796385 nodeName:}" failed. No retries permitted until 2025-06-20 19:18:53.914712705 +0000 UTC m=+158.751019274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/bbae80cc-311f-4427-bbfe-ddb31f796385-cilium-ipsec-secrets") pod "cilium-tdmdf" (UID: "bbae80cc-311f-4427-bbfe-ddb31f796385") : failed to sync secret cache: timed out waiting for the condition
Jun 20 19:18:53.567155 sshd[4822]: Connection closed by 10.200.16.10 port 34698
Jun 20 19:18:53.567783 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:53.571098 systemd[1]: sshd@23-10.200.4.21:22-10.200.16.10:34698.service: Deactivated successfully.
Jun 20 19:18:53.572861 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:18:53.574926 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:18:53.576321 systemd-logind[1706]: Removed session 26.
Jun 20 19:18:53.673945 systemd[1]: Started sshd@24-10.200.4.21:22-10.200.16.10:34702.service - OpenSSH per-connection server daemon (10.200.16.10:34702).
Jun 20 19:18:54.094198 containerd[1730]: time="2025-06-20T19:18:54.094084069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdmdf,Uid:bbae80cc-311f-4427-bbfe-ddb31f796385,Namespace:kube-system,Attempt:0,}"
Jun 20 19:18:54.129057 containerd[1730]: time="2025-06-20T19:18:54.128937921Z" level=info msg="connecting to shim fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:18:54.152594 systemd[1]: Started cri-containerd-fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9.scope - libcontainer container fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9.
Jun 20 19:18:54.177813 containerd[1730]: time="2025-06-20T19:18:54.177770136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdmdf,Uid:bbae80cc-311f-4427-bbfe-ddb31f796385,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\""
Jun 20 19:18:54.180266 containerd[1730]: time="2025-06-20T19:18:54.180232597Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:18:54.194642 containerd[1730]: time="2025-06-20T19:18:54.192736569Z" level=info msg="Container 8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:54.209059 containerd[1730]: time="2025-06-20T19:18:54.209027158Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\""
Jun 20 19:18:54.209698 containerd[1730]: time="2025-06-20T19:18:54.209674319Z" level=info msg="StartContainer for \"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\""
Jun 20 19:18:54.210859 containerd[1730]: time="2025-06-20T19:18:54.210777055Z" level=info msg="connecting to shim 8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" protocol=ttrpc version=3
Jun 20 19:18:54.229578 systemd[1]: Started cri-containerd-8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d.scope - libcontainer container 8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d.
Jun 20 19:18:54.256322 containerd[1730]: time="2025-06-20T19:18:54.256199007Z" level=info msg="StartContainer for \"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\" returns successfully"
Jun 20 19:18:54.260351 systemd[1]: cri-containerd-8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d.scope: Deactivated successfully.
Jun 20 19:18:54.263171 containerd[1730]: time="2025-06-20T19:18:54.263103192Z" level=info msg="received exit event container_id:\"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\" id:\"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\" pid:4890 exited_at:{seconds:1750447134 nanos:262876925}"
Jun 20 19:18:54.263331 containerd[1730]: time="2025-06-20T19:18:54.263305935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\" id:\"8b25864af4e1fdaa40a6b9b08bad90ce7968bf6820c39fb77d57a7a01b56602d\" pid:4890 exited_at:{seconds:1750447134 nanos:262876925}"
Jun 20 19:18:54.277934 sshd[4829]: Accepted publickey for core from 10.200.16.10 port 34702 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:54.279228 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:54.283331 systemd-logind[1706]: New session 27 of user core.
Jun 20 19:18:54.286560 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 19:18:54.656841 containerd[1730]: time="2025-06-20T19:18:54.656791738Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:18:54.671185 containerd[1730]: time="2025-06-20T19:18:54.670933724Z" level=info msg="Container 05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:54.682288 containerd[1730]: time="2025-06-20T19:18:54.682258352Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\""
Jun 20 19:18:54.682848 containerd[1730]: time="2025-06-20T19:18:54.682818592Z" level=info msg="StartContainer for \"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\""
Jun 20 19:18:54.684045 containerd[1730]: time="2025-06-20T19:18:54.683978338Z" level=info msg="connecting to shim 05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" protocol=ttrpc version=3
Jun 20 19:18:54.701610 systemd[1]: Started cri-containerd-05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124.scope - libcontainer container 05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124.
Jun 20 19:18:54.729864 containerd[1730]: time="2025-06-20T19:18:54.729829234Z" level=info msg="StartContainer for \"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\" returns successfully"
Jun 20 19:18:54.735372 systemd[1]: cri-containerd-05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124.scope: Deactivated successfully.
Jun 20 19:18:54.737623 containerd[1730]: time="2025-06-20T19:18:54.737583073Z" level=info msg="received exit event container_id:\"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\" id:\"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\" pid:4944 exited_at:{seconds:1750447134 nanos:737089348}"
Jun 20 19:18:54.738020 containerd[1730]: time="2025-06-20T19:18:54.737912230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\" id:\"05a116bfccf6425c5913f7690e8ad74fa35e821540dfd92aada6555e33df1124\" pid:4944 exited_at:{seconds:1750447134 nanos:737089348}"
Jun 20 19:18:55.385428 kubelet[3138]: E0620 19:18:55.385362 3138 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:18:55.656757 containerd[1730]: time="2025-06-20T19:18:55.656319753Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:18:55.679485 containerd[1730]: time="2025-06-20T19:18:55.679098253Z" level=info msg="Container dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:55.708683 containerd[1730]: time="2025-06-20T19:18:55.708638402Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\""
Jun 20 19:18:55.709592 containerd[1730]: time="2025-06-20T19:18:55.709546180Z" level=info msg="StartContainer for \"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\""
Jun 20 19:18:55.712406 containerd[1730]: time="2025-06-20T19:18:55.712324906Z" level=info msg="connecting to shim dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" protocol=ttrpc version=3
Jun 20 19:18:55.752752 systemd[1]: Started cri-containerd-dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba.scope - libcontainer container dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba.
Jun 20 19:18:55.809633 systemd[1]: cri-containerd-dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba.scope: Deactivated successfully.
Jun 20 19:18:55.810632 containerd[1730]: time="2025-06-20T19:18:55.810583496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\" id:\"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\" pid:4987 exited_at:{seconds:1750447135 nanos:810228422}"
Jun 20 19:18:55.811857 containerd[1730]: time="2025-06-20T19:18:55.811823132Z" level=info msg="received exit event container_id:\"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\" id:\"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\" pid:4987 exited_at:{seconds:1750447135 nanos:810228422}"
Jun 20 19:18:55.821490 containerd[1730]: time="2025-06-20T19:18:55.821424593Z" level=info msg="StartContainer for \"dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba\" returns successfully"
Jun 20 19:18:55.833007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dad8e663f8becfffac5cd9fd33cf990915d9ac8b4abcc5a4e946e1f546d764ba-rootfs.mount: Deactivated successfully.
Jun 20 19:18:56.662713 containerd[1730]: time="2025-06-20T19:18:56.662665183Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:18:56.685407 containerd[1730]: time="2025-06-20T19:18:56.685348387Z" level=info msg="Container 1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:56.701298 containerd[1730]: time="2025-06-20T19:18:56.701263233Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\""
Jun 20 19:18:56.701728 containerd[1730]: time="2025-06-20T19:18:56.701706584Z" level=info msg="StartContainer for \"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\""
Jun 20 19:18:56.702617 containerd[1730]: time="2025-06-20T19:18:56.702582830Z" level=info msg="connecting to shim 1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" protocol=ttrpc version=3
Jun 20 19:18:56.726591 systemd[1]: Started cri-containerd-1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195.scope - libcontainer container 1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195.
Jun 20 19:18:56.749867 systemd[1]: cri-containerd-1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195.scope: Deactivated successfully.
Jun 20 19:18:56.751237 containerd[1730]: time="2025-06-20T19:18:56.751198411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\" id:\"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\" pid:5029 exited_at:{seconds:1750447136 nanos:750659904}"
Jun 20 19:18:56.756070 containerd[1730]: time="2025-06-20T19:18:56.755907858Z" level=info msg="received exit event container_id:\"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\" id:\"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\" pid:5029 exited_at:{seconds:1750447136 nanos:750659904}"
Jun 20 19:18:56.761947 containerd[1730]: time="2025-06-20T19:18:56.761923302Z" level=info msg="StartContainer for \"1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195\" returns successfully"
Jun 20 19:18:56.774142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1529bec074ee945ecadb5285927110f7eb8b42c0158d61deb1bc64e81f7be195-rootfs.mount: Deactivated successfully.
Jun 20 19:18:57.665636 containerd[1730]: time="2025-06-20T19:18:57.665591409Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:18:57.690634 containerd[1730]: time="2025-06-20T19:18:57.690579214Z" level=info msg="Container c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:57.710558 containerd[1730]: time="2025-06-20T19:18:57.710512972Z" level=info msg="CreateContainer within sandbox \"fb95f33223b7860211261e3788d73ff7e0615a12a1da8811f113e0ee63c896c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\""
Jun 20 19:18:57.711069 containerd[1730]: time="2025-06-20T19:18:57.711048089Z" level=info msg="StartContainer for \"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\""
Jun 20 19:18:57.712207 containerd[1730]: time="2025-06-20T19:18:57.712136078Z" level=info msg="connecting to shim c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a" address="unix:///run/containerd/s/5565a8d8adfc6e5476a4f460f725358a59a0e16f1acaeb69544456657cc45ddb" protocol=ttrpc version=3
Jun 20 19:18:57.734606 systemd[1]: Started cri-containerd-c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a.scope - libcontainer container c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a.
Jun 20 19:18:57.767771 containerd[1730]: time="2025-06-20T19:18:57.767706876Z" level=info msg="StartContainer for \"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" returns successfully"
Jun 20 19:18:57.837705 containerd[1730]: time="2025-06-20T19:18:57.837658593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"ef9c5fc5dac4ea843410689b252ab616dde51564d431cecc7fd3afadcd596088\" pid:5099 exited_at:{seconds:1750447137 nanos:837166975}"
Jun 20 19:18:58.175463 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jun 20 19:18:58.689101 kubelet[3138]: I0620 19:18:58.689045 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tdmdf" podStartSLOduration=6.689025083 podStartE2EDuration="6.689025083s" podCreationTimestamp="2025-06-20 19:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:58.688177695 +0000 UTC m=+163.524484284" watchObservedRunningTime="2025-06-20 19:18:58.689025083 +0000 UTC m=+163.525331644"
Jun 20 19:18:58.776951 containerd[1730]: time="2025-06-20T19:18:58.776902796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"76a22746e65094b274c8eff4465d7302ead81736a6b6b8d881179cb82b99c628\" pid:5173 exit_status:1 exited_at:{seconds:1750447138 nanos:776340702}"
Jun 20 19:18:59.291834 kubelet[3138]: I0620 19:18:59.291780 3138 setters.go:600] "Node became not ready" node="ci-4344.1.0-a-a13bac16ef" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:18:59Z","lastTransitionTime":"2025-06-20T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:19:00.661244 systemd-networkd[1368]: lxc_health: Link UP
Jun 20 19:19:00.672629 systemd-networkd[1368]: lxc_health: Gained carrier
Jun 20 19:19:00.900799 containerd[1730]: time="2025-06-20T19:19:00.900752315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"53286d68e5a94d32b6da0f109aba20ed83fb18b7c46ad4089b6e84582d15fc08\" pid:5612 exited_at:{seconds:1750447140 nanos:900171043}"
Jun 20 19:19:01.825616 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jun 20 19:19:03.019192 containerd[1730]: time="2025-06-20T19:19:03.018929068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"f6643ab4a4cf6f4cc2867419472aa4f92000daa886fa22cff87c3816f05164c1\" pid:5647 exited_at:{seconds:1750447143 nanos:18023018}"
Jun 20 19:19:05.111144 containerd[1730]: time="2025-06-20T19:19:05.111095338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"fef04b3c26af64add098db6f60b4c2f967fbdcded4e313e70b796e8dbb3c8e10\" pid:5687 exited_at:{seconds:1750447145 nanos:110741309}"
Jun 20 19:19:05.113505 kubelet[3138]: E0620 19:19:05.113239 3138 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42994->127.0.0.1:33455: write tcp 127.0.0.1:42994->127.0.0.1:33455: write: broken pipe
Jun 20 19:19:07.196986 containerd[1730]: time="2025-06-20T19:19:07.196936122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c95d4b914f27c428c9ffec647dde89b070f72c9c17c97a6b6d52fc0f30c8748a\" id:\"5a6ff568739b1f20305a6d5647278afb453e761533a8a551be18e49e16717184\" pid:5710 exited_at:{seconds:1750447147 nanos:196598206}"
Jun 20 19:19:07.302967 sshd[4922]: Connection closed by 10.200.16.10 port 34702
Jun 20 19:19:07.303287 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:07.306145 systemd[1]: sshd@24-10.200.4.21:22-10.200.16.10:34702.service: Deactivated successfully.
Jun 20 19:19:07.307909 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 19:19:07.309737 systemd-logind[1706]: Session 27 logged out. Waiting for processes to exit.
Jun 20 19:19:07.310565 systemd-logind[1706]: Removed session 27.