Jun 20 19:13:03.007789 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:13:03.007822 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:03.007833 kernel: BIOS-provided physical RAM map: Jun 20 19:13:03.007841 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 19:13:03.007847 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 20 19:13:03.007854 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jun 20 19:13:03.007863 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jun 20 19:13:03.007871 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jun 20 19:13:03.007878 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jun 20 19:13:03.007885 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 20 19:13:03.007892 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 20 19:13:03.007899 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 20 19:13:03.007906 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 20 19:13:03.007913 kernel: printk: legacy bootconsole [earlyser0] enabled Jun 20 19:13:03.007924 kernel: NX (Execute Disable) protection: active Jun 20 19:13:03.007932 kernel: APIC: Static calls initialized Jun 20 19:13:03.007939 kernel: efi: EFI v2.7 by Microsoft Jun 20 19:13:03.007947 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Jun 20 19:13:03.007955 kernel: random: crng init done Jun 20 19:13:03.007962 kernel: secureboot: Secure boot disabled Jun 20 19:13:03.007970 kernel: SMBIOS 3.1.0 present. Jun 20 19:13:03.007977 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Jun 20 19:13:03.007985 kernel: DMI: Memory slots populated: 2/2 Jun 20 19:13:03.007994 kernel: Hypervisor detected: Microsoft Hyper-V Jun 20 19:13:03.008002 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jun 20 19:13:03.008009 kernel: Hyper-V: Nested features: 0x3e0101 Jun 20 19:13:03.008016 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 20 19:13:03.008024 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 20 19:13:03.008032 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 20 19:13:03.008040 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 20 19:13:03.008047 kernel: tsc: Detected 2300.000 MHz processor Jun 20 19:13:03.008055 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:13:03.008063 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:13:03.008073 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jun 20 19:13:03.008081 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 20 19:13:03.008089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:13:03.008096 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jun 20 19:13:03.008104 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jun 20 19:13:03.008111 kernel: Using GB pages for direct mapping Jun 20 19:13:03.008119 kernel: ACPI: Early table checksum verification 
disabled Jun 20 19:13:03.008130 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 20 19:13:03.008140 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008148 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008156 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 20 19:13:03.008163 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 20 19:13:03.008172 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008180 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008190 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008199 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 20 19:13:03.008208 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 20 19:13:03.008216 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:03.008224 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 20 19:13:03.008232 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Jun 20 19:13:03.008241 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 20 19:13:03.008249 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 20 19:13:03.008257 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 20 19:13:03.008268 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 20 19:13:03.008277 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Jun 20 19:13:03.008284 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jun 20 19:13:03.008292 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Jun 20 19:13:03.008301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 20 19:13:03.008309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jun 20 19:13:03.008317 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jun 20 19:13:03.008325 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jun 20 19:13:03.008334 kernel: Zone ranges: Jun 20 19:13:03.008344 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:13:03.008352 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 20 19:13:03.008360 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 20 19:13:03.008368 kernel: Device empty Jun 20 19:13:03.008377 kernel: Movable zone start for each node Jun 20 19:13:03.008385 kernel: Early memory node ranges Jun 20 19:13:03.008394 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 20 19:13:03.008402 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jun 20 19:13:03.008411 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jun 20 19:13:03.008419 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 20 19:13:03.008426 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 20 19:13:03.008433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 20 19:13:03.011296 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:13:03.011364 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 20 19:13:03.011373 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jun 20 19:13:03.011383 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jun 20 19:13:03.011392 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 20 19:13:03.011400 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:13:03.011413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:13:03.011421 
kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:13:03.011430 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 20 19:13:03.011448 kernel: TSC deadline timer available Jun 20 19:13:03.011460 kernel: CPU topo: Max. logical packages: 1 Jun 20 19:13:03.011473 kernel: CPU topo: Max. logical dies: 1 Jun 20 19:13:03.011480 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:13:03.011487 kernel: CPU topo: Max. threads per core: 2 Jun 20 19:13:03.011495 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:13:03.011507 kernel: CPU topo: Num. threads per package: 2 Jun 20 19:13:03.011516 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 20 19:13:03.011524 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 20 19:13:03.011533 kernel: Booting paravirtualized kernel on Hyper-V Jun 20 19:13:03.011542 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:13:03.011551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:13:03.011560 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 20 19:13:03.011569 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 20 19:13:03.011577 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:13:03.011588 kernel: Hyper-V: PV spinlocks enabled Jun 20 19:13:03.011597 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 19:13:03.011607 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:03.011617 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:13:03.011626 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 20 19:13:03.011635 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:13:03.011644 kernel: Fallback order for Node 0: 0 Jun 20 19:13:03.011652 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jun 20 19:13:03.011663 kernel: Policy zone: Normal Jun 20 19:13:03.011672 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:13:03.011681 kernel: software IO TLB: area num 2. Jun 20 19:13:03.011690 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:13:03.011699 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:13:03.011707 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:13:03.011716 kernel: Dynamic Preempt: voluntary Jun 20 19:13:03.011725 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:13:03.011735 kernel: rcu: RCU event tracing is enabled. Jun 20 19:13:03.011752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:13:03.011762 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:13:03.011771 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:13:03.011782 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:13:03.011791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:13:03.011801 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:13:03.011811 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:13:03.011820 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:13:03.011830 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 19:13:03.011840 kernel: Using NULL legacy PIC Jun 20 19:13:03.011851 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 20 19:13:03.011860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:13:03.011896 kernel: Console: colour dummy device 80x25 Jun 20 19:13:03.011906 kernel: printk: legacy console [tty1] enabled Jun 20 19:13:03.011915 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:13:03.011925 kernel: printk: legacy bootconsole [earlyser0] disabled Jun 20 19:13:03.011934 kernel: ACPI: Core revision 20240827 Jun 20 19:13:03.011945 kernel: Failed to register legacy timer interrupt Jun 20 19:13:03.011954 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:13:03.011963 kernel: x2apic enabled Jun 20 19:13:03.011973 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:13:03.011982 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jun 20 19:13:03.011991 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 20 19:13:03.012001 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jun 20 19:13:03.012011 kernel: Hyper-V: Using IPI hypercalls Jun 20 19:13:03.012020 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 20 19:13:03.012031 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 20 19:13:03.012041 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 20 19:13:03.012050 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 20 19:13:03.012060 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 20 19:13:03.012070 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 20 19:13:03.012079 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 20 19:13:03.012095 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Jun 20 19:13:03.012104 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 20 19:13:03.012116 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 20 19:13:03.012125 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 20 19:13:03.012134 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:13:03.012143 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:13:03.012152 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:13:03.012162 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 20 19:13:03.012171 kernel: RETBleed: Vulnerable Jun 20 19:13:03.012180 kernel: Speculative Store Bypass: Vulnerable Jun 20 19:13:03.012189 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:13:03.012198 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:13:03.012207 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:13:03.012218 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:13:03.012226 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 20 19:13:03.012236 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 20 19:13:03.012245 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 20 19:13:03.012254 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jun 20 19:13:03.012263 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jun 20 19:13:03.012272 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jun 20 19:13:03.012281 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:13:03.012290 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 20 19:13:03.012299 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 20 
19:13:03.012308 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 20 19:13:03.012319 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jun 20 19:13:03.012328 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jun 20 19:13:03.012337 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jun 20 19:13:03.012346 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jun 20 19:13:03.012355 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:13:03.012364 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:13:03.012373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:13:03.012382 kernel: landlock: Up and running. Jun 20 19:13:03.012391 kernel: SELinux: Initializing. Jun 20 19:13:03.012400 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:13:03.012410 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:13:03.012419 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jun 20 19:13:03.012430 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jun 20 19:13:03.014783 kernel: signal: max sigframe size: 11952 Jun 20 19:13:03.014814 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:13:03.014825 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:13:03.014835 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 19:13:03.014845 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:13:03.014855 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:13:03.014865 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:13:03.014874 kernel: .... 
node #0, CPUs: #1 Jun 20 19:13:03.014888 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:13:03.014898 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jun 20 19:13:03.014908 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved) Jun 20 19:13:03.014918 kernel: devtmpfs: initialized Jun 20 19:13:03.014927 kernel: x86/mm: Memory block size: 128MB Jun 20 19:13:03.014936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 20 19:13:03.014946 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:13:03.014955 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:13:03.014965 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:13:03.014976 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:13:03.014985 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:13:03.014995 kernel: audit: type=2000 audit(1750446779.030:1): state=initialized audit_enabled=0 res=1 Jun 20 19:13:03.015004 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:13:03.015014 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:13:03.015023 kernel: cpuidle: using governor menu Jun 20 19:13:03.015032 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:13:03.015042 kernel: dca service started, version 1.12.1 Jun 20 19:13:03.015051 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jun 20 19:13:03.015063 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jun 20 19:13:03.015072 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:13:03.015081 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:13:03.015091 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:13:03.015100 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:13:03.015109 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:13:03.015118 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:13:03.015128 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:13:03.015139 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:13:03.015148 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:13:03.015157 kernel: ACPI: Interpreter enabled Jun 20 19:13:03.015166 kernel: ACPI: PM: (supports S0 S5) Jun 20 19:13:03.015175 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:13:03.015185 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:13:03.015194 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 20 19:13:03.015203 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 20 19:13:03.015212 kernel: iommu: Default domain type: Translated Jun 20 19:13:03.015222 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:13:03.015233 kernel: efivars: Registered efivars operations Jun 20 19:13:03.015242 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:13:03.015252 kernel: PCI: System does not support PCI Jun 20 19:13:03.015261 kernel: vgaarb: loaded Jun 20 19:13:03.015270 kernel: clocksource: Switched to clocksource tsc-early Jun 20 19:13:03.015279 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:13:03.015289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:13:03.015298 kernel: pnp: PnP ACPI init Jun 20 19:13:03.015307 kernel: pnp: PnP ACPI: found 3 devices Jun 20 19:13:03.015318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 
19:13:03.015327 kernel: NET: Registered PF_INET protocol family Jun 20 19:13:03.015337 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:13:03.015346 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 20 19:13:03.015356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:13:03.015365 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:13:03.015374 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:13:03.015383 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 20 19:13:03.015395 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:13:03.015404 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:13:03.015413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:13:03.015422 kernel: NET: Registered PF_XDP protocol family Jun 20 19:13:03.015431 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:13:03.015453 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 20 19:13:03.015463 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Jun 20 19:13:03.015472 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jun 20 19:13:03.015481 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jun 20 19:13:03.015493 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 20 19:13:03.015503 kernel: clocksource: Switched to clocksource tsc Jun 20 19:13:03.015512 kernel: Initialise system trusted keyrings Jun 20 19:13:03.015520 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 20 19:13:03.015530 kernel: Key type asymmetric registered Jun 20 19:13:03.015539 kernel: Asymmetric key parser 'x509' registered Jun 20 19:13:03.015548 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:13:03.015558 kernel: io scheduler mq-deadline registered Jun 20 19:13:03.015567 kernel: io scheduler kyber registered Jun 20 19:13:03.015579 kernel: io scheduler bfq registered Jun 20 19:13:03.015588 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:13:03.015597 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:13:03.015607 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:13:03.015616 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 20 19:13:03.015625 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:13:03.015635 kernel: i8042: PNP: No PS/2 controller found. Jun 20 19:13:03.015792 kernel: rtc_cmos 00:02: registered as rtc0 Jun 20 19:13:03.015877 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:13:02 UTC (1750446782) Jun 20 19:13:03.015950 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 20 19:13:03.015961 kernel: intel_pstate: Intel P-state driver initializing Jun 20 19:13:03.015971 kernel: efifb: probing for efifb Jun 20 19:13:03.015980 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 20 19:13:03.015990 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 20 19:13:03.015999 kernel: efifb: scrolling: redraw Jun 20 19:13:03.016009 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 19:13:03.016018 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:13:03.016029 kernel: fb0: EFI VGA frame buffer device Jun 20 19:13:03.016039 kernel: pstore: Using crash dump compression: deflate Jun 20 19:13:03.016048 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 19:13:03.016057 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:13:03.016067 kernel: Segment Routing with IPv6 Jun 20 19:13:03.016076 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 
19:13:03.016086 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:13:03.016095 kernel: Key type dns_resolver registered Jun 20 19:13:03.016104 kernel: IPI shorthand broadcast: enabled Jun 20 19:13:03.016115 kernel: sched_clock: Marking stable (3105003950, 96618402)->(3576034919, -374412567) Jun 20 19:13:03.016124 kernel: registered taskstats version 1 Jun 20 19:13:03.016133 kernel: Loading compiled-in X.509 certificates Jun 20 19:13:03.016142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:13:03.016152 kernel: Demotion targets for Node 0: null Jun 20 19:13:03.016161 kernel: Key type .fscrypt registered Jun 20 19:13:03.016170 kernel: Key type fscrypt-provisioning registered Jun 20 19:13:03.016179 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:13:03.016189 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:13:03.016200 kernel: ima: No architecture policies found Jun 20 19:13:03.016209 kernel: clk: Disabling unused clocks Jun 20 19:13:03.016219 kernel: Warning: unable to open an initial console. Jun 20 19:13:03.016228 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:13:03.016237 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:13:03.016247 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:13:03.016256 kernel: Run /init as init process Jun 20 19:13:03.016265 kernel: with arguments: Jun 20 19:13:03.016274 kernel: /init Jun 20 19:13:03.016285 kernel: with environment: Jun 20 19:13:03.016294 kernel: HOME=/ Jun 20 19:13:03.016303 kernel: TERM=linux Jun 20 19:13:03.016312 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:13:03.016323 systemd[1]: Successfully made /usr/ read-only. 
Jun 20 19:13:03.016337 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:13:03.016348 systemd[1]: Detected virtualization microsoft. Jun 20 19:13:03.016359 systemd[1]: Detected architecture x86-64. Jun 20 19:13:03.016369 systemd[1]: Running in initrd. Jun 20 19:13:03.016378 systemd[1]: No hostname configured, using default hostname. Jun 20 19:13:03.016388 systemd[1]: Hostname set to . Jun 20 19:13:03.016398 systemd[1]: Initializing machine ID from random generator. Jun 20 19:13:03.016408 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:13:03.016418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:03.016428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:03.019163 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:13:03.019183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:13:03.019194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:13:03.019206 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:13:03.019218 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:13:03.019229 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jun 20 19:13:03.019239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:03.019253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:03.019263 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:13:03.019273 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:13:03.019283 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:13:03.019294 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:13:03.019304 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:13:03.019314 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:13:03.019324 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:13:03.019334 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:13:03.019346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:03.019356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:03.019366 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:03.019377 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:13:03.019387 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:13:03.019397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:13:03.019407 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:13:03.019417 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:13:03.019431 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:13:03.019458 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 20 19:13:03.019468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:13:03.019488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:03.019524 systemd-journald[205]: Collecting audit messages is disabled. Jun 20 19:13:03.019553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:13:03.019564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:03.019576 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:13:03.019587 systemd-journald[205]: Journal started Jun 20 19:13:03.019612 systemd-journald[205]: Runtime Journal (/run/log/journal/66d52f1a6e034d5b9afa1b4342951072) is 8M, max 158.9M, 150.9M free. Jun 20 19:13:03.002616 systemd-modules-load[207]: Inserted module 'overlay' Jun 20 19:13:03.029782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:13:03.029831 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:13:03.035548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:13:03.044582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:03.048484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:13:03.053604 kernel: Bridge firewalling registered Jun 20 19:13:03.054566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:13:03.055786 systemd-modules-load[207]: Inserted module 'br_netfilter' Jun 20 19:13:03.060474 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:13:03.073538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jun 20 19:13:03.073771 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:13:03.073908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:13:03.075715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:13:03.081699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:13:03.099913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:13:03.106690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:03.106884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:03.109547 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:13:03.119650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:13:03.141314 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:03.170760 systemd-resolved[245]: Positive Trust Anchors: Jun 20 19:13:03.170773 systemd-resolved[245]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:13:03.170808 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:13:03.192690 systemd-resolved[245]: Defaulting to hostname 'linux'. Jun 20 19:13:03.195949 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:13:03.200677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:13:03.219458 kernel: SCSI subsystem initialized Jun 20 19:13:03.226457 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:13:03.236457 kernel: iscsi: registered transport (tcp) Jun 20 19:13:03.254623 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:13:03.254676 kernel: QLogic iSCSI HBA Driver Jun 20 19:13:03.269573 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:13:03.285303 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:03.288727 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:13:03.320998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:13:03.325235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 20 19:13:03.373463 kernel: raid6: avx512x4 gen() 44680 MB/s Jun 20 19:13:03.390450 kernel: raid6: avx512x2 gen() 43346 MB/s Jun 20 19:13:03.407448 kernel: raid6: avx512x1 gen() 24959 MB/s Jun 20 19:13:03.425452 kernel: raid6: avx2x4 gen() 34737 MB/s Jun 20 19:13:03.443451 kernel: raid6: avx2x2 gen() 36323 MB/s Jun 20 19:13:03.461503 kernel: raid6: avx2x1 gen() 27703 MB/s Jun 20 19:13:03.461518 kernel: raid6: using algorithm avx512x4 gen() 44680 MB/s Jun 20 19:13:03.481696 kernel: raid6: .... xor() 7155 MB/s, rmw enabled Jun 20 19:13:03.481710 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:13:03.500460 kernel: xor: automatically using best checksumming function avx Jun 20 19:13:03.622492 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:13:03.627581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:13:03.630590 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:13:03.652534 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 20 19:13:03.656698 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:03.663567 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:13:03.678178 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jun 20 19:13:03.698160 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:13:03.703513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:13:03.737980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:03.744999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:13:03.788478 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:13:03.801471 kernel: AES CTR mode by8 optimization enabled Jun 20 19:13:03.822972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jun 20 19:13:03.830174 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 19:13:03.823047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:03.833162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:03.844864 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:13:03.844907 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:13:03.854042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:03.869275 kernel: PTP clock support registered Jun 20 19:13:03.869320 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 19:13:03.882926 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 19:13:03.882974 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 19:13:03.887473 kernel: scsi host0: storvsc_host_t Jun 20 19:13:03.770956 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 19:13:03.776317 kernel: hv_vmbus: registering driver hv_utils Jun 20 19:13:03.776337 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 19:13:03.776346 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 19:13:03.776354 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 19:13:03.776489 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 19:13:03.776502 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dcf37 (unnamed net_device) (uninitialized): VF slot 1 added Jun 20 19:13:03.776656 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:13:03.776665 systemd-journald[205]: Time jumped backwards, rotating. Jun 20 19:13:03.776710 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 19:13:03.755949 systemd-resolved[245]: Clock change detected. Flushing caches. Jun 20 19:13:03.767077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:13:03.791383 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 19:13:03.799377 kernel: hv_vmbus: registering driver hv_pci Jun 20 19:13:03.799422 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 19:13:03.799434 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 20 19:13:03.801928 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 19:13:03.818151 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 20 19:13:03.818388 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 20 19:13:03.822237 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 19:13:03.822392 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:13:03.822482 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:13:03.824532 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 20 19:13:03.826440 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 19:13:03.826584 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 20 19:13:03.842471 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 20 19:13:03.846805 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:13:03.847067 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 20 19:13:03.863424 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:03.863772 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 20 19:13:03.866379 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 20 19:13:03.886459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#292 cmd 0x85 status: scsi 
0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:04.128493 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 19:13:04.135457 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:04.417387 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:13:04.673601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:13:04.691897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 20 19:13:04.703990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 20 19:13:04.725167 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:13:04.731422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:13:04.737608 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:13:04.738282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:13:04.738334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:04.738383 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:13:04.741483 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:13:04.742157 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:13:04.777864 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:13:04.782405 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:04.788420 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 20 19:13:04.795547 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 20 19:13:04.795886 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 20 19:13:04.796188 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:13:04.803442 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 20 19:13:04.811579 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 20 19:13:04.811631 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 20 19:13:04.816263 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 20 19:13:04.834318 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:13:04.834569 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 20 19:13:04.840390 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 20 19:13:04.855684 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 20 19:13:04.869401 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 20 19:13:04.875097 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dcf37 eth0: VF registering: eth1 Jun 20 19:13:04.875278 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 20 19:13:04.885374 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 20 19:13:05.796764 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:05.796835 disk-uuid[671]: The operation has completed successfully. Jun 20 19:13:05.850544 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:13:05.850638 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jun 20 19:13:05.885590 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:13:05.897848 sh[712]: Success Jun 20 19:13:05.936822 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:13:05.936898 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:13:05.936913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:13:05.947381 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:13:06.222995 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:13:06.229469 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:13:06.246692 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:13:06.273413 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:13:06.273467 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (725) Jun 20 19:13:06.276377 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:13:06.279176 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:06.280651 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:13:06.581864 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:13:06.585885 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:13:06.589269 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:13:06.589979 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:13:06.600077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 20 19:13:06.629393 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (758) Jun 20 19:13:06.635042 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:06.635094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:06.636244 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:06.677417 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:06.677769 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:13:06.681615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:13:06.682084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:13:06.688631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:13:06.716548 systemd-networkd[894]: lo: Link UP Jun 20 19:13:06.716557 systemd-networkd[894]: lo: Gained carrier Jun 20 19:13:06.718428 systemd-networkd[894]: Enumeration completed Jun 20 19:13:06.724544 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:13:06.718815 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:06.731670 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:13:06.731858 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dcf37 eth0: Data path switched to VF: enP30832s1 Jun 20 19:13:06.718819 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:13:06.719302 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:13:06.724579 systemd[1]: Reached target network.target - Network. 
Jun 20 19:13:06.731858 systemd-networkd[894]: enP30832s1: Link UP Jun 20 19:13:06.731922 systemd-networkd[894]: eth0: Link UP Jun 20 19:13:06.732011 systemd-networkd[894]: eth0: Gained carrier Jun 20 19:13:06.732020 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:06.739535 systemd-networkd[894]: enP30832s1: Gained carrier Jun 20 19:13:06.749394 systemd-networkd[894]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:13:07.593338 ignition[892]: Ignition 2.21.0 Jun 20 19:13:07.593351 ignition[892]: Stage: fetch-offline Jun 20 19:13:07.596029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:13:07.593472 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:07.599721 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:13:07.593480 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:07.593568 ignition[892]: parsed url from cmdline: "" Jun 20 19:13:07.593571 ignition[892]: no config URL provided Jun 20 19:13:07.593576 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:13:07.593581 ignition[892]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:13:07.593586 ignition[892]: failed to fetch config: resource requires networking Jun 20 19:13:07.593749 ignition[892]: Ignition finished successfully Jun 20 19:13:07.627589 ignition[904]: Ignition 2.21.0 Jun 20 19:13:07.627599 ignition[904]: Stage: fetch Jun 20 19:13:07.627822 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:07.627831 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:07.627918 ignition[904]: parsed url from cmdline: "" Jun 20 19:13:07.627921 ignition[904]: no config URL provided Jun 20 19:13:07.627926 ignition[904]: reading system config file 
"/usr/lib/ignition/user.ign" Jun 20 19:13:07.627933 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:13:07.627980 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 19:13:07.758509 ignition[904]: GET result: OK Jun 20 19:13:07.758643 ignition[904]: config has been read from IMDS userdata Jun 20 19:13:07.758684 ignition[904]: parsing config with SHA512: e18f1c0dba3a25b4ecbab5cdff760aaf00934f816d0c082b8fe69717fed56c3baafd41ea4c01244352e14b20d181187de2a377832a9323f55d6f55f0ff6f55a8 Jun 20 19:13:07.767013 unknown[904]: fetched base config from "system" Jun 20 19:13:07.767035 unknown[904]: fetched base config from "system" Jun 20 19:13:07.767496 ignition[904]: fetch: fetch complete Jun 20 19:13:07.767039 unknown[904]: fetched user config from "azure" Jun 20 19:13:07.767501 ignition[904]: fetch: fetch passed Jun 20 19:13:07.769825 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:13:07.767545 ignition[904]: Ignition finished successfully Jun 20 19:13:07.774515 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:13:07.796826 ignition[910]: Ignition 2.21.0 Jun 20 19:13:07.796837 ignition[910]: Stage: kargs Jun 20 19:13:07.797045 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:07.797053 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:07.802841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:13:07.801387 ignition[910]: kargs: kargs passed Jun 20 19:13:07.804498 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 19:13:07.801452 ignition[910]: Ignition finished successfully Jun 20 19:13:07.816606 systemd-networkd[894]: eth0: Gained IPv6LL Jun 20 19:13:07.825925 ignition[918]: Ignition 2.21.0 Jun 20 19:13:07.825936 ignition[918]: Stage: disks Jun 20 19:13:07.826169 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:07.828218 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:13:07.826179 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:07.827143 ignition[918]: disks: disks passed Jun 20 19:13:07.827179 ignition[918]: Ignition finished successfully Jun 20 19:13:07.837821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:13:07.840805 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:13:07.842970 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:13:07.852407 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:13:07.855410 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:13:07.858497 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:13:07.928618 systemd-fsck[927]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 19:13:07.932739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:13:07.939218 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:13:08.214370 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:13:08.215045 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:13:08.215898 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:13:08.235738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:13:08.239714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jun 20 19:13:08.256499 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:13:08.263809 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:13:08.263845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:13:08.275581 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:13:08.280702 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (936) Jun 20 19:13:08.280728 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:08.281810 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:13:08.286787 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:08.286815 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:08.289024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:13:08.392524 systemd-networkd[894]: enP30832s1: Gained IPv6LL Jun 20 19:13:08.800439 coreos-metadata[938]: Jun 20 19:13:08.800 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:13:08.803417 coreos-metadata[938]: Jun 20 19:13:08.803 INFO Fetch successful Jun 20 19:13:08.804968 coreos-metadata[938]: Jun 20 19:13:08.803 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:13:08.816108 coreos-metadata[938]: Jun 20 19:13:08.816 INFO Fetch successful Jun 20 19:13:08.832014 coreos-metadata[938]: Jun 20 19:13:08.831 INFO wrote hostname ci-4344.1.0-a-abe1e11a87 to /sysroot/etc/hostname Jun 20 19:13:08.834113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 20 19:13:08.966758 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:13:09.024060 initrd-setup-root[973]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:13:09.045405 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:13:09.050322 initrd-setup-root[987]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:13:10.072484 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:13:10.076438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:13:10.094511 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:13:10.101070 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:13:10.104013 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:10.132173 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:13:10.136199 ignition[1059]: INFO : Ignition 2.21.0 Jun 20 19:13:10.136199 ignition[1059]: INFO : Stage: mount Jun 20 19:13:10.148011 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:10.148011 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:10.148011 ignition[1059]: INFO : mount: mount passed Jun 20 19:13:10.148011 ignition[1059]: INFO : Ignition finished successfully Jun 20 19:13:10.139155 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:13:10.148451 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:13:10.168670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 20 19:13:10.187377 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1070) Jun 20 19:13:10.189826 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:10.189856 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:10.189868 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:10.195107 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:13:10.221626 ignition[1087]: INFO : Ignition 2.21.0 Jun 20 19:13:10.221626 ignition[1087]: INFO : Stage: files Jun 20 19:13:10.226402 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:10.226402 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:10.226402 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:13:10.238944 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:13:10.242460 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:13:10.288805 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:13:10.291604 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:13:10.291604 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:13:10.291604 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:13:10.291604 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 20 19:13:10.289248 unknown[1087]: wrote ssh authorized keys file for user: core Jun 20 19:13:10.329705 ignition[1087]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:13:10.446132 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:13:10.446132 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:13:10.446132 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 19:13:10.943493 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:13:10.999145 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:13:11.001625 ignition[1087]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:13:11.030039 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:13:11.032874 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:13:11.032874 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:13:11.040403 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:13:11.040403 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:13:11.040403 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 20 19:13:11.809692 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:13:12.075409 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:13:12.075409 ignition[1087]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:13:12.110649 ignition[1087]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:13:12.121568 ignition[1087]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:13:12.121568 ignition[1087]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:13:12.121568 ignition[1087]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:13:12.135731 ignition[1087]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:13:12.135731 ignition[1087]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:13:12.135731 ignition[1087]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:13:12.135731 ignition[1087]: INFO : files: files passed Jun 20 19:13:12.135731 ignition[1087]: INFO : Ignition finished successfully Jun 20 19:13:12.125456 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:13:12.129616 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:13:12.141478 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:13:12.152559 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:13:12.152652 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:13:12.171389 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:12.171389 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:12.177871 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:12.175952 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:13:12.181881 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:13:12.184474 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jun 20 19:13:12.234083 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:13:12.234186 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:13:12.238606 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:13:12.243419 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:13:12.246298 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:13:12.247056 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:13:12.271712 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:13:12.274470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:13:12.289755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:13:12.289926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:13:12.290265 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:13:12.297896 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:13:12.298052 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:13:12.306038 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:13:12.307619 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:13:12.312521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:13:12.315664 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:13:12.320514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:13:12.323680 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:13:12.326835 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:13:12.329604 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:13:12.338526 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:13:12.341684 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:13:12.344236 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:13:12.345843 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:13:12.346005 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:13:12.350534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:13:12.351825 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:13:12.352138 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:13:12.354264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:13:12.368470 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:13:12.368616 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:13:12.373760 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:13:12.373870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:13:12.376670 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:13:12.376772 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:13:12.379807 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 19:13:12.379928 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 19:13:12.388781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:13:12.398708 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:13:12.398914 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:13:12.412523 ignition[1141]: INFO : Ignition 2.21.0
Jun 20 19:13:12.412523 ignition[1141]: INFO : Stage: umount
Jun 20 19:13:12.412523 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:13:12.412523 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 19:13:12.412523 ignition[1141]: INFO : umount: umount passed
Jun 20 19:13:12.412523 ignition[1141]: INFO : Ignition finished successfully
Jun 20 19:13:12.414471 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:13:12.417553 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:13:12.417707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:13:12.421606 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:13:12.421736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:13:12.439924 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:13:12.440034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:13:12.442832 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:13:12.443058 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:13:12.449299 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:13:12.449354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:13:12.452849 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:13:12.452896 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:13:12.460414 systemd[1]: Stopped target network.target - Network.
Jun 20 19:13:12.462105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:13:12.462161 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:13:12.467437 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:13:12.471400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:13:12.476012 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:13:12.484272 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:13:12.487347 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:13:12.488311 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:13:12.489908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:13:12.492683 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:13:12.492716 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:13:12.498126 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:13:12.498190 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:13:12.504643 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:13:12.504691 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:13:12.510615 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:13:12.516205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:13:12.522584 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:13:12.523275 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:13:12.524352 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:13:12.529703 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:13:12.529942 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:13:12.530052 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:13:12.534531 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:13:12.534752 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:13:12.534830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:13:12.543440 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:13:12.545135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:13:12.545169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:13:12.549527 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:13:12.558301 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:13:12.558371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:13:12.573523 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:13:12.574752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:13:12.577420 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:13:12.577468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:13:12.584438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:13:12.584484 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:13:12.586857 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:13:12.595965 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:13:12.596028 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:13:12.603665 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:13:12.603804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:13:12.609685 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:13:12.609756 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:13:12.613576 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dcf37 eth0: Data path switched from VF: enP30832s1
Jun 20 19:13:12.615368 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:13:12.619462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:13:12.619500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:13:12.623417 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:13:12.623465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:13:12.630427 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:13:12.630474 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:13:12.633164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:13:12.633209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:13:12.638221 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:13:12.639463 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:13:12.639516 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:13:12.642725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:13:12.642766 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:13:12.647783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:13:12.647827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:12.655068 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:13:12.655119 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:13:12.655157 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:13:12.655704 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:13:12.655844 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:13:12.661655 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:13:12.661741 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:13:12.710927 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:13:12.711047 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:13:12.712804 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:13:12.713126 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:13:12.713173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:13:12.714467 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:13:12.731610 systemd[1]: Switching root.
Jun 20 19:13:12.794058 systemd-journald[205]: Journal stopped
Jun 20 19:13:16.652960 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:13:16.652990 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:13:16.653002 kernel: SELinux: policy capability open_perms=1
Jun 20 19:13:16.653011 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:13:16.653019 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:13:16.653028 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:13:16.653040 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:13:16.653050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:13:16.653059 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:13:16.653068 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:13:16.653078 kernel: audit: type=1403 audit(1750446794.095:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:13:16.653088 systemd[1]: Successfully loaded SELinux policy in 132.849ms.
Jun 20 19:13:16.653100 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.363ms.
Jun 20 19:13:16.653114 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:13:16.653125 systemd[1]: Detected virtualization microsoft.
Jun 20 19:13:16.653135 systemd[1]: Detected architecture x86-64.
Jun 20 19:13:16.653145 systemd[1]: Detected first boot.
Jun 20 19:13:16.653155 systemd[1]: Hostname set to .
Jun 20 19:13:16.653167 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:13:16.653177 zram_generator::config[1186]: No configuration found.
Jun 20 19:13:16.653188 kernel: Guest personality initialized and is inactive
Jun 20 19:13:16.653197 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jun 20 19:13:16.653206 kernel: Initialized host personality
Jun 20 19:13:16.653215 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:13:16.653225 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:13:16.653238 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:13:16.653248 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:13:16.653258 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:13:16.653268 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:13:16.653278 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:13:16.653289 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:13:16.653300 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:13:16.653311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:13:16.653321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:13:16.653332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:13:16.653343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:13:16.653353 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:13:16.653375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:13:16.653385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:13:16.653395 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:13:16.653408 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:13:16.653422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:13:16.653432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:13:16.653443 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:13:16.653453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:13:16.653464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:13:16.653474 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:13:16.653484 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:13:16.653496 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:13:16.653507 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:13:16.653517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:13:16.653527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:13:16.653538 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:13:16.653548 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:13:16.653558 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:13:16.653567 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:13:16.653580 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:13:16.653590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:13:16.653599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:13:16.653609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:13:16.653620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:13:16.653631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:13:16.653641 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:13:16.653651 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:13:16.653661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:16.653671 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:13:16.653681 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:13:16.653690 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:13:16.653701 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:13:16.653713 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:13:16.653723 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:13:16.653734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:16.653744 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:13:16.653754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:13:16.653764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:13:16.653774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:13:16.653785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:13:16.653796 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:13:16.653807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:13:16.653817 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:13:16.653828 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:13:16.653837 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:13:16.653848 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:13:16.653858 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:13:16.653869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:16.653880 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:13:16.653892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:13:16.653902 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:13:16.653913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:13:16.653924 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:13:16.653935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:13:16.653946 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:13:16.653957 systemd[1]: Stopped verity-setup.service.
Jun 20 19:13:16.653968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:16.653982 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:13:16.653992 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:13:16.654003 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:13:16.654013 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:13:16.654024 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:13:16.654034 kernel: fuse: init (API version 7.41)
Jun 20 19:13:16.654044 kernel: loop: module loaded
Jun 20 19:13:16.654054 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:13:16.654065 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:13:16.654078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:13:16.654089 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:13:16.654100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:13:16.654111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:13:16.654123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:13:16.654160 systemd-journald[1279]: Collecting audit messages is disabled.
Jun 20 19:13:16.654189 systemd-journald[1279]: Journal started
Jun 20 19:13:16.654213 systemd-journald[1279]: Runtime Journal (/run/log/journal/e8a15d1bad7e49ec833e308ced762df8) is 8M, max 158.9M, 150.9M free.
Jun 20 19:13:16.175870 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:13:16.188097 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 20 19:13:16.188509 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:13:16.662416 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:13:16.663555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:13:16.663810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:13:16.667461 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:13:16.667710 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:13:16.672677 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:13:16.672905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:13:16.676627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:13:16.680273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:13:16.684726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:13:16.688276 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:13:16.701553 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:13:16.705445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:13:16.708352 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:13:16.710916 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:13:16.710952 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:13:16.714772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:13:16.721494 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:13:16.723394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:16.725985 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:13:16.730455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:13:16.733427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:13:16.734413 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:13:16.738073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:13:16.739248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:13:16.744698 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:13:16.748560 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:13:16.762518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:13:16.765693 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:13:16.768543 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:13:16.773660 systemd-journald[1279]: Time spent on flushing to /var/log/journal/e8a15d1bad7e49ec833e308ced762df8 is 36.114ms for 984 entries.
Jun 20 19:13:16.773660 systemd-journald[1279]: System Journal (/var/log/journal/e8a15d1bad7e49ec833e308ced762df8) is 11.8M, max 2.6G, 2.6G free.
Jun 20 19:13:16.870868 systemd-journald[1279]: Received client request to flush runtime journal.
Jun 20 19:13:16.870916 kernel: ACPI: bus type drm_connector registered
Jun 20 19:13:16.870935 systemd-journald[1279]: /var/log/journal/e8a15d1bad7e49ec833e308ced762df8/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jun 20 19:13:16.870955 systemd-journald[1279]: Rotating system journal.
Jun 20 19:13:16.870972 kernel: loop0: detected capacity change from 0 to 28496
Jun 20 19:13:16.781004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:13:16.784082 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:13:16.791611 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:13:16.807692 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:13:16.808441 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:13:16.856584 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:13:16.871810 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:13:16.881037 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:13:17.099193 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:13:17.103554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:13:17.188380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:13:17.192502 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:13:17.242650 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jun 20 19:13:17.242667 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jun 20 19:13:17.246216 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:13:17.252421 kernel: loop1: detected capacity change from 0 to 113872
Jun 20 19:13:17.601532 kernel: loop2: detected capacity change from 0 to 224512
Jun 20 19:13:17.621378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:13:17.625842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:13:17.655573 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Jun 20 19:13:17.667380 kernel: loop3: detected capacity change from 0 to 146240
Jun 20 19:13:17.850460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:13:17.857950 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:13:17.894337 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:13:17.964505 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:13:18.034387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 19:13:18.055484 kernel: hv_vmbus: registering driver hv_balloon
Jun 20 19:13:18.057788 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 20 19:13:18.060823 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:13:18.060494 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:13:18.079392 kernel: loop4: detected capacity change from 0 to 28496
Jun 20 19:13:18.112246 kernel: loop5: detected capacity change from 0 to 113872
Jun 20 19:13:18.120405 kernel: hv_vmbus: registering driver hyperv_fb
Jun 20 19:13:18.122396 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jun 20 19:13:18.122593 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jun 20 19:13:18.124445 kernel: Console: switching to colour dummy device 80x25
Jun 20 19:13:18.129704 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:13:18.141382 kernel: loop6: detected capacity change from 0 to 224512
Jun 20 19:13:18.167391 kernel: loop7: detected capacity change from 0 to 146240
Jun 20 19:13:18.191562 (sd-merge)[1410]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 19:13:18.198399 (sd-merge)[1410]: Merged extensions into '/usr'.
Jun 20 19:13:18.213454 systemd[1]: Reload requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:13:18.213470 systemd[1]: Reloading...
Jun 20 19:13:18.352590 systemd-networkd[1363]: lo: Link UP
Jun 20 19:13:18.352602 systemd-networkd[1363]: lo: Gained carrier
Jun 20 19:13:18.357969 systemd-networkd[1363]: Enumeration completed
Jun 20 19:13:18.358304 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:18.358314 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:13:18.367403 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jun 20 19:13:18.371391 zram_generator::config[1460]: No configuration found.
Jun 20 19:13:18.373382 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jun 20 19:13:18.379387 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dcf37 eth0: Data path switched to VF: enP30832s1
Jun 20 19:13:18.380325 systemd-networkd[1363]: enP30832s1: Link UP
Jun 20 19:13:18.383569 systemd-networkd[1363]: eth0: Link UP
Jun 20 19:13:18.383580 systemd-networkd[1363]: eth0: Gained carrier
Jun 20 19:13:18.383603 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:13:18.386664 systemd-networkd[1363]: enP30832s1: Gained carrier
Jun 20 19:13:18.397598 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jun 20 19:13:18.474405 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jun 20 19:13:18.539987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:13:18.638057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jun 20 19:13:18.640327 systemd[1]: Reloading finished in 426 ms.
Jun 20 19:13:18.657923 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:13:18.659847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:13:18.700715 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:13:18.704406 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:13:18.709334 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:13:18.714583 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:13:18.719696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:13:18.724834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:13:18.748323 systemd[1]: Reload requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:13:18.748455 systemd[1]: Reloading...
Jun 20 19:13:18.750879 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:13:18.750903 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:13:18.751111 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:13:18.751330 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:13:18.756151 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:13:18.757468 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jun 20 19:13:18.757548 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jun 20 19:13:18.777658 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:13:18.777667 systemd-tmpfiles[1526]: Skipping /boot
Jun 20 19:13:18.787277 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:13:18.787404 systemd-tmpfiles[1526]: Skipping /boot
Jun 20 19:13:18.830429 zram_generator::config[1563]: No configuration found.
Jun 20 19:13:18.918185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:13:19.015917 systemd[1]: Reloading finished in 267 ms.
Jun 20 19:13:19.036068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:13:19.039275 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:13:19.041757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:13:19.050588 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:13:19.054651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:13:19.065410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:13:19.069590 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:13:19.073403 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:13:19.077757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:13:19.097133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:19.097602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:19.100201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:13:19.106556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:13:19.111609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:13:19.113826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:19.113949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:19.114047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:19.121509 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:13:19.129589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:13:19.129769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:13:19.132855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:13:19.133006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:13:19.145848 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:13:19.151046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:13:19.151275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:13:19.163571 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:13:19.168052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:19.168248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:13:19.170609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:13:19.174018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:13:19.179280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:13:19.183742 systemd-resolved[1630]: Positive Trust Anchors:
Jun 20 19:13:19.184477 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:13:19.184815 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:13:19.186523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:13:19.189628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:13:19.189672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:13:19.189736 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:13:19.192036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:13:19.192291 systemd-resolved[1630]: Using system hostname 'ci-4344.1.0-a-abe1e11a87'.
Jun 20 19:13:19.194502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:13:19.198101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:13:19.201131 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:13:19.203965 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:13:19.204184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:13:19.206203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:13:19.206455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:13:19.208290 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:13:19.208514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:13:19.211750 systemd[1]: Reached target network.target - Network.
Jun 20 19:13:19.213588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:13:19.215544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:13:19.215588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:13:19.278798 augenrules[1670]: No rules
Jun 20 19:13:19.279952 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:13:19.280184 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:13:19.586423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:13:19.588286 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:13:20.040518 systemd-networkd[1363]: eth0: Gained IPv6LL
Jun 20 19:13:20.042901 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:13:20.045337 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:13:20.168639 systemd-networkd[1363]: enP30832s1: Gained IPv6LL
Jun 20 19:13:21.396585 ldconfig[1321]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:13:21.408573 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:13:21.419717 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:13:21.439155 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:13:21.440771 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:13:21.443683 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:13:21.445700 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:13:21.449433 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:13:21.451229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:13:21.455504 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:13:21.458406 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:13:21.460132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:13:21.460168 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:13:21.461457 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:13:21.479682 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:13:21.482603 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:13:21.487539 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:13:21.490543 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:13:21.493413 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:13:21.496376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:13:21.499765 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:13:21.503012 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:13:21.506135 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:13:21.507549 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:13:21.510457 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:13:21.510482 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:13:21.513177 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 20 19:13:21.517444 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:13:21.523688 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:13:21.529489 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:13:21.532309 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:13:21.535452 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:13:21.543220 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:13:21.545184 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:13:21.546871 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:13:21.550483 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jun 20 19:13:21.553520 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jun 20 19:13:21.555614 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jun 20 19:13:21.557937 jq[1688]: false
Jun 20 19:13:21.558260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:13:21.563735 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:13:21.569551 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:13:21.572738 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:13:21.576483 KVP[1694]: KVP starting; pid is:1694
Jun 20 19:13:21.579539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:13:21.581968 KVP[1694]: KVP LIC Version: 3.1
Jun 20 19:13:21.582392 kernel: hv_utils: KVP IC version 4.0
Jun 20 19:13:21.584332 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:13:21.596827 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:13:21.599395 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:13:21.599857 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:13:21.602615 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:13:21.607741 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:13:21.621478 extend-filesystems[1689]: Found /dev/nvme0n1p6
Jun 20 19:13:21.621557 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:13:21.625322 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:13:21.625527 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:13:21.630722 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing passwd entry cache
Jun 20 19:13:21.630734 oslogin_cache_refresh[1690]: Refreshing passwd entry cache
Jun 20 19:13:21.645644 (chronyd)[1683]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 20 19:13:21.646648 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:13:21.646850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:13:21.656219 jq[1707]: true
Jun 20 19:13:21.656492 extend-filesystems[1689]: Found /dev/nvme0n1p9
Jun 20 19:13:21.665656 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting users, quitting
Jun 20 19:13:21.665656 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:13:21.665656 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Refreshing group entry cache
Jun 20 19:13:21.663618 oslogin_cache_refresh[1690]: Failure getting users, quitting
Jun 20 19:13:21.663638 oslogin_cache_refresh[1690]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:13:21.663684 oslogin_cache_refresh[1690]: Refreshing group entry cache
Jun 20 19:13:21.672659 extend-filesystems[1689]: Checking size of /dev/nvme0n1p9
Jun 20 19:13:21.674690 chronyd[1737]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 20 19:13:21.673217 (ntainerd)[1733]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:13:21.684847 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:13:21.687726 chronyd[1737]: Timezone right/UTC failed leap second check, ignoring
Jun 20 19:13:21.687933 chronyd[1737]: Loaded seccomp filter (level 2)
Jun 20 19:13:21.688265 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Failure getting groups, quitting
Jun 20 19:13:21.688265 google_oslogin_nss_cache[1690]: oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:13:21.688078 oslogin_cache_refresh[1690]: Failure getting groups, quitting
Jun 20 19:13:21.688094 oslogin_cache_refresh[1690]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:13:21.689875 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:13:21.691463 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:13:21.695378 systemd[1]: Started chronyd.service - NTP client/server.
Jun 20 19:13:21.697438 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:13:21.697659 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:13:21.698899 update_engine[1706]: I20250620 19:13:21.698822 1706 main.cc:92] Flatcar Update Engine starting
Jun 20 19:13:21.708435 jq[1731]: true
Jun 20 19:13:21.720019 extend-filesystems[1689]: Old size kept for /dev/nvme0n1p9
Jun 20 19:13:21.720526 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:13:21.720723 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:13:21.785630 tar[1713]: linux-amd64/LICENSE
Jun 20 19:13:21.786783 tar[1713]: linux-amd64/helm
Jun 20 19:13:21.796161 dbus-daemon[1686]: [system] SELinux support is enabled
Jun 20 19:13:21.796337 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:13:21.803325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:13:21.803373 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:13:21.807511 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:13:21.807534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:13:21.811881 systemd-logind[1704]: New seat seat0.
Jun 20 19:13:21.813102 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:13:21.820304 update_engine[1706]: I20250620 19:13:21.812999 1706 update_check_scheduler.cc:74] Next update check in 6m7s
Jun 20 19:13:21.821284 systemd-logind[1704]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:13:21.843821 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:13:21.847600 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:13:21.858246 bash[1765]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:13:21.858149 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:13:21.862161 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 19:13:21.939979 coreos-metadata[1685]: Jun 20 19:13:21.939 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 19:13:21.940300 coreos-metadata[1685]: Jun 20 19:13:21.940 INFO Fetch successful
Jun 20 19:13:21.940300 coreos-metadata[1685]: Jun 20 19:13:21.940 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 20 19:13:21.945066 coreos-metadata[1685]: Jun 20 19:13:21.945 INFO Fetch successful
Jun 20 19:13:21.946604 coreos-metadata[1685]: Jun 20 19:13:21.946 INFO Fetching http://168.63.129.16/machine/819ab329-6e51-4ee7-b05d-b4e87820655b/cc975495%2Df35a%2D4614%2Dbe03%2Db0ad3c4fa45d.%5Fci%2D4344.1.0%2Da%2Dabe1e11a87?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 20 19:13:21.947040 coreos-metadata[1685]: Jun 20 19:13:21.947 INFO Fetch successful
Jun 20 19:13:21.947208 coreos-metadata[1685]: Jun 20 19:13:21.947 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 20 19:13:21.967145 coreos-metadata[1685]: Jun 20 19:13:21.966 INFO Fetch successful
Jun 20 19:13:22.013292 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:13:22.017827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:13:22.121795 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:13:22.153281 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:13:22.160327 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:13:22.164818 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jun 20 19:13:22.198955 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:13:22.206498 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:13:22.206761 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:13:22.213745 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:13:22.228487 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jun 20 19:13:22.252372 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:13:22.258837 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:13:22.267160 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:13:22.270629 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:13:22.628051 tar[1713]: linux-amd64/README.md
Jun 20 19:13:22.639819 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:13:22.657302 containerd[1733]: time="2025-06-20T19:13:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:13:22.658014 containerd[1733]: time="2025-06-20T19:13:22.657847432Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666122812Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.238µs"
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666155372Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666174526Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666314018Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666325654Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666347600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666408858Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666482 containerd[1733]: time="2025-06-20T19:13:22.666419565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666710 containerd[1733]: time="2025-06-20T19:13:22.666655785Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666710 containerd[1733]: time="2025-06-20T19:13:22.666669598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666710 containerd[1733]: time="2025-06-20T19:13:22.666687536Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666710 containerd[1733]: time="2025-06-20T19:13:22.666697343Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666793 containerd[1733]: time="2025-06-20T19:13:22.666761231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666952 containerd[1733]: time="2025-06-20T19:13:22.666929566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666985 containerd[1733]: time="2025-06-20T19:13:22.666958915Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:13:22.666985 containerd[1733]: time="2025-06-20T19:13:22.666970819Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:13:22.667029 containerd[1733]: time="2025-06-20T19:13:22.667004321Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:13:22.667275 containerd[1733]: time="2025-06-20T19:13:22.667204731Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:13:22.667275 containerd[1733]: time="2025-06-20T19:13:22.667256564Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686604842Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686664703Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686683550Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686718510Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686735402Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686748882Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686783860Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686799328Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686812726Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686824539Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686834484Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686848750Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686968438Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:13:22.687530 containerd[1733]: time="2025-06-20T19:13:22.686988972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687004817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687017511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687028094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687038159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687049050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687060201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687071520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687081554Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687092492Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687172637Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687191186Z" level=info msg="Start snapshots syncer"
Jun 20 19:13:22.687879 containerd[1733]: time="2025-06-20T19:13:22.687214087Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:13:22.688117 containerd[1733]: time="2025-06-20T19:13:22.687583984Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:13:22.688117 containerd[1733]: time="2025-06-20T19:13:22.687644708Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687727986Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687833366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687861761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687873108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687889796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687904868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687916140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687938696Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687963534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687975135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.687985964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.688020790Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.688036223Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:22.688248 containerd[1733]: time="2025-06-20T19:13:22.688047144Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688057784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688066630Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688086947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688097947Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688118290Z" level=info msg="runtime interface created" Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688123841Z" level=info msg="created NRI interface" Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688132924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688143006Z" level=info msg="Connect containerd service" Jun 20 19:13:22.688527 containerd[1733]: time="2025-06-20T19:13:22.688178567Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:13:22.689837 
containerd[1733]: time="2025-06-20T19:13:22.689078104Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:13:23.057702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:13:23.066684 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331586835Z" level=info msg="Start subscribing containerd event" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331658999Z" level=info msg="Start recovering state" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331754750Z" level=info msg="Start event monitor" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331770340Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331777640Z" level=info msg="Start streaming server" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331787231Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331795260Z" level=info msg="runtime interface starting up..." Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331802014Z" level=info msg="starting plugins..." Jun 20 19:13:23.331985 containerd[1733]: time="2025-06-20T19:13:23.331815453Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:13:23.332502 containerd[1733]: time="2025-06-20T19:13:23.332406611Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:13:23.332502 containerd[1733]: time="2025-06-20T19:13:23.332464411Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 20 19:13:23.332628 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:13:23.333962 containerd[1733]: time="2025-06-20T19:13:23.333753707Z" level=info msg="containerd successfully booted in 0.677081s" Jun 20 19:13:23.335137 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:13:23.338392 systemd[1]: Startup finished in 3.244s (kernel) + 11.387s (initrd) + 9.373s (userspace) = 24.006s. Jun 20 19:13:23.535861 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:13:23.538246 login[1832]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:13:23.556815 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:13:23.560848 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:13:23.576877 systemd-logind[1704]: New session 2 of user core. Jun 20 19:13:23.582899 systemd-logind[1704]: New session 1 of user core. Jun 20 19:13:23.589503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:13:23.594507 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:13:23.604760 (systemd)[1869]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:13:23.607420 systemd-logind[1704]: New session c1 of user core. 
Jun 20 19:13:23.663924 waagent[1829]: 2025-06-20T19:13:23.661664Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 19:13:23.664306 waagent[1829]: 2025-06-20T19:13:23.664255Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 19:13:23.666456 waagent[1829]: 2025-06-20T19:13:23.666400Z INFO Daemon Daemon Python: 3.11.12 Jun 20 19:13:23.668341 waagent[1829]: 2025-06-20T19:13:23.668297Z INFO Daemon Daemon Run daemon Jun 20 19:13:23.674017 waagent[1829]: 2025-06-20T19:13:23.670505Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 19:13:23.674415 waagent[1829]: 2025-06-20T19:13:23.674367Z INFO Daemon Daemon Using waagent for provisioning Jun 20 19:13:23.677167 waagent[1829]: 2025-06-20T19:13:23.677126Z INFO Daemon Daemon Activate resource disk Jun 20 19:13:23.679510 waagent[1829]: 2025-06-20T19:13:23.679467Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 19:13:23.684499 waagent[1829]: 2025-06-20T19:13:23.684458Z INFO Daemon Daemon Found device: None Jun 20 19:13:23.686792 waagent[1829]: 2025-06-20T19:13:23.686747Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 19:13:23.690603 waagent[1829]: 2025-06-20T19:13:23.690562Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 19:13:23.698439 waagent[1829]: 2025-06-20T19:13:23.695891Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:13:23.698439 waagent[1829]: 2025-06-20T19:13:23.698301Z INFO Daemon Daemon Running default provisioning handler Jun 20 19:13:23.708668 waagent[1829]: 2025-06-20T19:13:23.708626Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jun 20 19:13:23.721883 waagent[1829]: 2025-06-20T19:13:23.714440Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 19:13:23.721883 waagent[1829]: 2025-06-20T19:13:23.714612Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 19:13:23.721883 waagent[1829]: 2025-06-20T19:13:23.714891Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 19:13:23.767092 kubelet[1851]: E0620 19:13:23.767018 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:13:23.771921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:13:23.772070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:13:23.774575 systemd[1]: kubelet.service: Consumed 1.008s CPU time, 264.3M memory peak. Jun 20 19:13:23.779455 waagent[1829]: 2025-06-20T19:13:23.777091Z INFO Daemon Daemon Successfully mounted dvd Jun 20 19:13:23.807772 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 19:13:23.813859 waagent[1829]: 2025-06-20T19:13:23.811770Z INFO Daemon Daemon Detect protocol endpoint Jun 20 19:13:23.814172 waagent[1829]: 2025-06-20T19:13:23.814137Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:13:23.816684 waagent[1829]: 2025-06-20T19:13:23.816654Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 19:13:23.819655 waagent[1829]: 2025-06-20T19:13:23.819629Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 19:13:23.822134 waagent[1829]: 2025-06-20T19:13:23.822108Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 19:13:23.824039 waagent[1829]: 2025-06-20T19:13:23.824009Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 19:13:23.834104 systemd[1869]: Queued start job for default target default.target. Jun 20 19:13:23.838343 waagent[1829]: 2025-06-20T19:13:23.838312Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 19:13:23.844684 waagent[1829]: 2025-06-20T19:13:23.839057Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 19:13:23.844684 waagent[1829]: 2025-06-20T19:13:23.839332Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 19:13:23.842211 systemd[1869]: Created slice app.slice - User Application Slice. Jun 20 19:13:23.842235 systemd[1869]: Reached target paths.target - Paths. Jun 20 19:13:23.842274 systemd[1869]: Reached target timers.target - Timers. Jun 20 19:13:23.844444 systemd[1869]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:13:23.853218 systemd[1869]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:13:23.853390 systemd[1869]: Reached target sockets.target - Sockets. Jun 20 19:13:23.853436 systemd[1869]: Reached target basic.target - Basic System. Jun 20 19:13:23.853513 systemd[1869]: Reached target default.target - Main User Target. Jun 20 19:13:23.853541 systemd[1869]: Startup finished in 238ms. Jun 20 19:13:23.853566 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:13:23.854863 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:13:23.855639 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:13:23.996624 waagent[1829]: 2025-06-20T19:13:23.996534Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 19:13:23.998175 waagent[1829]: 2025-06-20T19:13:23.998135Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 19:13:24.008993 waagent[1829]: 2025-06-20T19:13:24.008953Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:13:24.029392 waagent[1829]: 2025-06-20T19:13:24.029339Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 19:13:24.031148 waagent[1829]: 2025-06-20T19:13:24.031112Z INFO Daemon Jun 20 19:13:24.031825 waagent[1829]: 2025-06-20T19:13:24.031795Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1ded72bd-295c-4cf5-b136-1b1997aeff11 eTag: 2396400270358486634 source: Fabric] Jun 20 19:13:24.034437 waagent[1829]: 2025-06-20T19:13:24.034409Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 19:13:24.036056 waagent[1829]: 2025-06-20T19:13:24.036028Z INFO Daemon Jun 20 19:13:24.036799 waagent[1829]: 2025-06-20T19:13:24.036731Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:13:24.051279 waagent[1829]: 2025-06-20T19:13:24.051235Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 19:13:24.228048 waagent[1829]: 2025-06-20T19:13:24.227978Z INFO Daemon Downloaded certificate {'thumbprint': '0DBB80063B6230C429D1ED71BF7B720A3D39FAD5', 'hasPrivateKey': True} Jun 20 19:13:24.230864 waagent[1829]: 2025-06-20T19:13:24.230825Z INFO Daemon Fetch goal state completed Jun 20 19:13:24.291314 waagent[1829]: 2025-06-20T19:13:24.291246Z INFO Daemon Daemon Starting provisioning Jun 20 19:13:24.292733 waagent[1829]: 2025-06-20T19:13:24.292645Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 20 19:13:24.293414 waagent[1829]: 2025-06-20T19:13:24.293388Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-abe1e11a87] Jun 20 19:13:24.312585 waagent[1829]: 2025-06-20T19:13:24.312530Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-abe1e11a87] Jun 20 19:13:24.314480 waagent[1829]: 2025-06-20T19:13:24.314436Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 19:13:24.316238 waagent[1829]: 2025-06-20T19:13:24.316205Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 19:13:24.324800 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:24.324810 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:13:24.324845 systemd-networkd[1363]: eth0: DHCP lease lost Jun 20 19:13:24.325912 waagent[1829]: 2025-06-20T19:13:24.325855Z INFO Daemon Daemon Create user account if not exists Jun 20 19:13:24.327269 waagent[1829]: 2025-06-20T19:13:24.327176Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 19:13:24.329919 waagent[1829]: 2025-06-20T19:13:24.327295Z INFO Daemon Daemon Configure sudoer Jun 20 19:13:24.333174 waagent[1829]: 2025-06-20T19:13:24.333128Z INFO Daemon Daemon Configure sshd Jun 20 19:13:24.337780 waagent[1829]: 2025-06-20T19:13:24.337735Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 19:13:24.341062 waagent[1829]: 2025-06-20T19:13:24.340629Z INFO Daemon Daemon Deploy ssh public key. 
Jun 20 19:13:24.354393 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:13:25.469755 waagent[1829]: 2025-06-20T19:13:25.469689Z INFO Daemon Daemon Provisioning complete Jun 20 19:13:25.480967 waagent[1829]: 2025-06-20T19:13:25.480931Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 19:13:25.481297 waagent[1829]: 2025-06-20T19:13:25.481164Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 19:13:25.486512 waagent[1829]: 2025-06-20T19:13:25.481336Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 19:13:25.586386 waagent[1919]: 2025-06-20T19:13:25.586272Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 19:13:25.586726 waagent[1919]: 2025-06-20T19:13:25.586421Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 19:13:25.586726 waagent[1919]: 2025-06-20T19:13:25.586464Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 19:13:25.586726 waagent[1919]: 2025-06-20T19:13:25.586504Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 20 19:13:25.624238 waagent[1919]: 2025-06-20T19:13:25.624161Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 19:13:25.624432 waagent[1919]: 2025-06-20T19:13:25.624390Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:25.624495 waagent[1919]: 2025-06-20T19:13:25.624467Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:25.634914 waagent[1919]: 2025-06-20T19:13:25.634845Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:13:25.642542 waagent[1919]: 2025-06-20T19:13:25.642504Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 
19:13:25.642944 waagent[1919]: 2025-06-20T19:13:25.642915Z INFO ExtHandler Jun 20 19:13:25.642992 waagent[1919]: 2025-06-20T19:13:25.642974Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 07274503-1d38-472f-9c45-a0de3ce8762e eTag: 2396400270358486634 source: Fabric] Jun 20 19:13:25.643189 waagent[1919]: 2025-06-20T19:13:25.643165Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 20 19:13:25.643582 waagent[1919]: 2025-06-20T19:13:25.643554Z INFO ExtHandler Jun 20 19:13:25.643621 waagent[1919]: 2025-06-20T19:13:25.643601Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:13:25.646935 waagent[1919]: 2025-06-20T19:13:25.646902Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 19:13:25.712982 waagent[1919]: 2025-06-20T19:13:25.712915Z INFO ExtHandler Downloaded certificate {'thumbprint': '0DBB80063B6230C429D1ED71BF7B720A3D39FAD5', 'hasPrivateKey': True} Jun 20 19:13:25.713420 waagent[1919]: 2025-06-20T19:13:25.713349Z INFO ExtHandler Fetch goal state completed Jun 20 19:13:25.730201 waagent[1919]: 2025-06-20T19:13:25.730085Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 19:13:25.734768 waagent[1919]: 2025-06-20T19:13:25.734718Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1919 Jun 20 19:13:25.734897 waagent[1919]: 2025-06-20T19:13:25.734857Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 19:13:25.735137 waagent[1919]: 2025-06-20T19:13:25.735115Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 19:13:25.736221 waagent[1919]: 2025-06-20T19:13:25.736182Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 19:13:25.736571 waagent[1919]: 
2025-06-20T19:13:25.736541Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 19:13:25.736691 waagent[1919]: 2025-06-20T19:13:25.736671Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 19:13:25.737090 waagent[1919]: 2025-06-20T19:13:25.737068Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 19:13:25.756982 waagent[1919]: 2025-06-20T19:13:25.756948Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 19:13:25.757151 waagent[1919]: 2025-06-20T19:13:25.757127Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 19:13:25.762555 waagent[1919]: 2025-06-20T19:13:25.762489Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 19:13:25.767822 systemd[1]: Reload requested from client PID 1934 ('systemctl') (unit waagent.service)... Jun 20 19:13:25.767835 systemd[1]: Reloading... Jun 20 19:13:25.852753 zram_generator::config[1975]: No configuration found. 
Jun 20 19:13:25.883390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#200 cmd 0xa1 status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.893782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#202 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.894065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#207 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.909383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.919476 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.929496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.939413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:25.951713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:13:26.049826 systemd[1]: Reloading finished in 281 ms. Jun 20 19:13:26.067944 waagent[1919]: 2025-06-20T19:13:26.067028Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 19:13:26.067944 waagent[1919]: 2025-06-20T19:13:26.067181Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 19:13:26.372746 waagent[1919]: 2025-06-20T19:13:26.372619Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 19:13:26.372985 waagent[1919]: 2025-06-20T19:13:26.372957Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 19:13:26.373670 waagent[1919]: 2025-06-20T19:13:26.373628Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 19:13:26.374121 waagent[1919]: 2025-06-20T19:13:26.374083Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 19:13:26.374310 waagent[1919]: 2025-06-20T19:13:26.374273Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:26.374424 waagent[1919]: 2025-06-20T19:13:26.374328Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:26.374460 waagent[1919]: 2025-06-20T19:13:26.374428Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 19:13:26.374530 waagent[1919]: 2025-06-20T19:13:26.374502Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:26.374747 waagent[1919]: 2025-06-20T19:13:26.374708Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:26.374958 waagent[1919]: 2025-06-20T19:13:26.374937Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 19:13:26.375069 waagent[1919]: 2025-06-20T19:13:26.375048Z INFO EnvHandler ExtHandler Configure routes Jun 20 19:13:26.375121 waagent[1919]: 2025-06-20T19:13:26.375085Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 19:13:26.375475 waagent[1919]: 2025-06-20T19:13:26.375447Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 19:13:26.375559 waagent[1919]: 2025-06-20T19:13:26.375526Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 19:13:26.375671 waagent[1919]: 2025-06-20T19:13:26.375637Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jun 20 19:13:26.375704 waagent[1919]: 2025-06-20T19:13:26.375688Z INFO EnvHandler ExtHandler Gateway:None Jun 20 19:13:26.375740 waagent[1919]: 2025-06-20T19:13:26.375724Z INFO EnvHandler ExtHandler Routes:None Jun 20 19:13:26.376148 waagent[1919]: 2025-06-20T19:13:26.376120Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 19:13:26.376148 waagent[1919]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 19:13:26.376148 waagent[1919]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 19:13:26.376148 waagent[1919]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 19:13:26.376148 waagent[1919]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:26.376148 waagent[1919]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:26.376148 waagent[1919]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:26.383091 waagent[1919]: 2025-06-20T19:13:26.383048Z INFO ExtHandler ExtHandler Jun 20 19:13:26.383166 waagent[1919]: 2025-06-20T19:13:26.383112Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3aeabf76-3b7f-4eb6-bfb7-936fab78cf78 correlation 6d04bf02-d702-4798-9437-10500a0322fc created: 2025-06-20T19:12:31.261208Z] Jun 20 19:13:26.383425 waagent[1919]: 2025-06-20T19:13:26.383400Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 19:13:26.383805 waagent[1919]: 2025-06-20T19:13:26.383784Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 19:13:26.416199 waagent[1919]: 2025-06-20T19:13:26.416068Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 19:13:26.416199 waagent[1919]: Try `iptables -h' or 'iptables --help' for more information.) 
Jun 20 19:13:26.416534 waagent[1919]: 2025-06-20T19:13:26.416503Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2884A19D-1045-4526-B473-126C1A49E9C8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jun 20 19:13:26.418946 waagent[1919]: 2025-06-20T19:13:26.418897Z INFO MonitorHandler ExtHandler Network interfaces:
Jun 20 19:13:26.418946 waagent[1919]: Executing ['ip', '-a', '-o', 'link']:
Jun 20 19:13:26.418946 waagent[1919]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jun 20 19:13:26.418946 waagent[1919]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:cf:37 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jun 20 19:13:26.418946 waagent[1919]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:cf:37 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jun 20 19:13:26.418946 waagent[1919]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jun 20 19:13:26.418946 waagent[1919]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jun 20 19:13:26.418946 waagent[1919]: 2: eth0 inet 10.200.4.34/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jun 20 19:13:26.418946 waagent[1919]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jun 20 19:13:26.418946 waagent[1919]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jun 20 19:13:26.418946 waagent[1919]: 2: eth0 inet6 fe80::7e1e:52ff:fe1d:cf37/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 20 19:13:26.418946 waagent[1919]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe1d:cf37/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 20 19:13:26.472406 waagent[1919]: 2025-06-20T19:13:26.472314Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jun 20 19:13:26.472406 waagent[1919]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:13:26.472406 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.472406 waagent[1919]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:13:26.472406 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.472406 waagent[1919]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes)
Jun 20 19:13:26.472406 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.472406 waagent[1919]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:13:26.472406 waagent[1919]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:13:26.472406 waagent[1919]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:13:26.475089 waagent[1919]: 2025-06-20T19:13:26.475035Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 20 19:13:26.475089 waagent[1919]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:13:26.475089 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.475089 waagent[1919]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:13:26.475089 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.475089 waagent[1919]: Chain OUTPUT (policy ACCEPT 3 packets, 349 bytes)
Jun 20 19:13:26.475089 waagent[1919]: pkts bytes target prot opt in out source destination
Jun 20 19:13:26.475089 waagent[1919]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:13:26.475089 waagent[1919]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:13:26.475089 waagent[1919]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:13:33.949410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:13:33.951313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:13:42.145767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:13:42.154581 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:13:42.186766 kubelet[2076]: E0620 19:13:42.186717 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:13:42.190090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:13:42.190230 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:13:42.190627 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110M memory peak.
Jun 20 19:13:45.478731 chronyd[1737]: Selected source PHC0
Jun 20 19:13:52.199711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:13:52.201573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:13:52.235534 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:13:52.238038 systemd[1]: Started sshd@0-10.200.4.34:22-10.200.16.10:42314.service - OpenSSH per-connection server daemon (10.200.16.10:42314).
Jun 20 19:13:52.804732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:13:52.810578 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:13:52.851663 kubelet[2094]: E0620 19:13:52.851611 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:13:52.853748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:13:52.853893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:13:52.854258 systemd[1]: kubelet.service: Consumed 138ms CPU time, 108.3M memory peak.
Jun 20 19:13:53.211005 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 42314 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:53.212428 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:53.217317 systemd-logind[1704]: New session 3 of user core.
Jun 20 19:13:53.223505 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:13:53.742884 systemd[1]: Started sshd@1-10.200.4.34:22-10.200.16.10:48916.service - OpenSSH per-connection server daemon (10.200.16.10:48916).
Jun 20 19:13:54.345724 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 48916 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:54.347099 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:54.351768 systemd-logind[1704]: New session 4 of user core.
Jun 20 19:13:54.358520 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:13:54.768196 sshd[2106]: Connection closed by 10.200.16.10 port 48916
Jun 20 19:13:54.768929 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:54.772021 systemd[1]: sshd@1-10.200.4.34:22-10.200.16.10:48916.service: Deactivated successfully.
Jun 20 19:13:54.773596 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:13:54.775233 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:13:54.776057 systemd-logind[1704]: Removed session 4.
Jun 20 19:13:54.875670 systemd[1]: Started sshd@2-10.200.4.34:22-10.200.16.10:48924.service - OpenSSH per-connection server daemon (10.200.16.10:48924).
Jun 20 19:13:55.476213 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 48924 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:55.477614 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:55.482258 systemd-logind[1704]: New session 5 of user core.
Jun 20 19:13:55.488529 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:13:55.895998 sshd[2114]: Connection closed by 10.200.16.10 port 48924
Jun 20 19:13:55.896950 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:55.900569 systemd[1]: sshd@2-10.200.4.34:22-10.200.16.10:48924.service: Deactivated successfully.
Jun 20 19:13:55.902121 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:13:55.902835 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:13:55.904177 systemd-logind[1704]: Removed session 5.
Jun 20 19:13:56.008836 systemd[1]: Started sshd@3-10.200.4.34:22-10.200.16.10:48928.service - OpenSSH per-connection server daemon (10.200.16.10:48928).
Jun 20 19:13:56.604384 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 48928 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:56.605723 sshd-session[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:56.610066 systemd-logind[1704]: New session 6 of user core.
Jun 20 19:13:56.617520 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:13:57.024051 sshd[2122]: Connection closed by 10.200.16.10 port 48928
Jun 20 19:13:57.024754 sshd-session[2120]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:57.027874 systemd[1]: sshd@3-10.200.4.34:22-10.200.16.10:48928.service: Deactivated successfully.
Jun 20 19:13:57.029538 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:13:57.030928 systemd-logind[1704]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:13:57.032119 systemd-logind[1704]: Removed session 6.
Jun 20 19:13:57.132494 systemd[1]: Started sshd@4-10.200.4.34:22-10.200.16.10:48942.service - OpenSSH per-connection server daemon (10.200.16.10:48942).
Jun 20 19:13:57.726948 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 48942 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:57.728294 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:57.733108 systemd-logind[1704]: New session 7 of user core.
Jun 20 19:13:57.738532 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:13:58.163452 sudo[2131]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:13:58.163688 sudo[2131]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:13:58.176531 sudo[2131]: pam_unix(sudo:session): session closed for user root
Jun 20 19:13:58.278769 sshd[2130]: Connection closed by 10.200.16.10 port 48942
Jun 20 19:13:58.279687 sshd-session[2128]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:58.283200 systemd[1]: sshd@4-10.200.4.34:22-10.200.16.10:48942.service: Deactivated successfully.
Jun 20 19:13:58.284927 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:13:58.286742 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:13:58.287666 systemd-logind[1704]: Removed session 7.
Jun 20 19:13:58.392049 systemd[1]: Started sshd@5-10.200.4.34:22-10.200.16.10:48954.service - OpenSSH per-connection server daemon (10.200.16.10:48954).
Jun 20 19:13:58.993948 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 48954 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:13:58.995405 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:59.000290 systemd-logind[1704]: New session 8 of user core.
Jun 20 19:13:59.009507 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:13:59.324123 sudo[2141]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:13:59.324353 sudo[2141]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:13:59.331082 sudo[2141]: pam_unix(sudo:session): session closed for user root
Jun 20 19:13:59.335218 sudo[2140]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:13:59.335470 sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:13:59.343550 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:13:59.376422 augenrules[2163]: No rules
Jun 20 19:13:59.377467 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:13:59.377680 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:13:59.378774 sudo[2140]: pam_unix(sudo:session): session closed for user root
Jun 20 19:13:59.474531 sshd[2139]: Connection closed by 10.200.16.10 port 48954
Jun 20 19:13:59.475195 sshd-session[2137]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:59.478706 systemd[1]: sshd@5-10.200.4.34:22-10.200.16.10:48954.service: Deactivated successfully.
Jun 20 19:13:59.480217 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:13:59.480994 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:13:59.482188 systemd-logind[1704]: Removed session 8.
Jun 20 19:13:59.581096 systemd[1]: Started sshd@6-10.200.4.34:22-10.200.16.10:54840.service - OpenSSH per-connection server daemon (10.200.16.10:54840).
Jun 20 19:14:00.184376 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 54840 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:14:00.185742 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:14:00.190513 systemd-logind[1704]: New session 9 of user core.
Jun 20 19:14:00.199534 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:14:00.512966 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:14:00.513197 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:14:02.035038 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:14:02.044726 (dockerd)[2194]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:14:02.949220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:14:02.953507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:02.968193 dockerd[2194]: time="2025-06-20T19:14:02.968141926Z" level=info msg="Starting up"
Jun 20 19:14:02.969216 dockerd[2194]: time="2025-06-20T19:14:02.969179097Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:14:03.024263 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport422942823-merged.mount: Deactivated successfully.
Jun 20 19:14:05.866346 systemd[1]: var-lib-docker-metacopy\x2dcheck1014338058-merged.mount: Deactivated successfully.
Jun 20 19:14:06.175697 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jun 20 19:14:07.253714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:07.257396 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:14:07.292207 kubelet[2222]: E0620 19:14:07.292147 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:14:07.294096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:14:07.294234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:14:07.294610 systemd[1]: kubelet.service: Consumed 146ms CPU time, 110.2M memory peak.
Jun 20 19:14:07.339281 update_engine[1706]: I20250620 19:14:07.339211 1706 update_attempter.cc:509] Updating boot flags...
Jun 20 19:14:08.049827 dockerd[2194]: time="2025-06-20T19:14:08.049115656Z" level=info msg="Loading containers: start."
Jun 20 19:14:08.219497 kernel: Initializing XFRM netlink socket
Jun 20 19:14:08.561651 systemd-networkd[1363]: docker0: Link UP
Jun 20 19:14:08.583435 dockerd[2194]: time="2025-06-20T19:14:08.583387156Z" level=info msg="Loading containers: done."
Jun 20 19:14:09.456867 dockerd[2194]: time="2025-06-20T19:14:09.456800162Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:14:09.457353 dockerd[2194]: time="2025-06-20T19:14:09.456917783Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:14:09.457353 dockerd[2194]: time="2025-06-20T19:14:09.457059935Z" level=info msg="Initializing buildkit"
Jun 20 19:14:09.664577 dockerd[2194]: time="2025-06-20T19:14:09.664512215Z" level=info msg="Completed buildkit initialization"
Jun 20 19:14:09.671301 dockerd[2194]: time="2025-06-20T19:14:09.671257731Z" level=info msg="Daemon has completed initialization"
Jun 20 19:14:09.671301 dockerd[2194]: time="2025-06-20T19:14:09.671320014Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:14:09.671792 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:14:10.992201 containerd[1733]: time="2025-06-20T19:14:10.992156593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jun 20 19:14:11.787312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923879567.mount: Deactivated successfully.
Jun 20 19:14:17.449341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 19:14:17.452568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:19.332900 containerd[1733]: time="2025-06-20T19:14:19.332853623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:19.336171 containerd[1733]: time="2025-06-20T19:14:19.336143379Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jun 20 19:14:19.338339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:19.339944 containerd[1733]: time="2025-06-20T19:14:19.339904647Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:19.344479 containerd[1733]: time="2025-06-20T19:14:19.344448043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:19.345264 containerd[1733]: time="2025-06-20T19:14:19.345238418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 8.353039444s"
Jun 20 19:14:19.345313 containerd[1733]: time="2025-06-20T19:14:19.345278427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jun 20 19:14:19.345872 containerd[1733]: time="2025-06-20T19:14:19.345852628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 20 19:14:19.346660 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:14:19.382772 kubelet[2514]: E0620 19:14:19.382705 2514 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:14:19.384534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:14:19.384704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:14:19.385054 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110.5M memory peak.
Jun 20 19:14:20.646621 containerd[1733]: time="2025-06-20T19:14:20.646572386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:20.649015 containerd[1733]: time="2025-06-20T19:14:20.648985525Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jun 20 19:14:20.651522 containerd[1733]: time="2025-06-20T19:14:20.651491546Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:20.655295 containerd[1733]: time="2025-06-20T19:14:20.655246950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:20.656301 containerd[1733]: time="2025-06-20T19:14:20.655867358Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.309983487s"
Jun 20 19:14:20.656301 containerd[1733]: time="2025-06-20T19:14:20.655899975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jun 20 19:14:20.656548 containerd[1733]: time="2025-06-20T19:14:20.656520115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jun 20 19:14:21.744908 containerd[1733]: time="2025-06-20T19:14:21.744856241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:21.747241 containerd[1733]: time="2025-06-20T19:14:21.747200933Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jun 20 19:14:21.751112 containerd[1733]: time="2025-06-20T19:14:21.751061073Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:21.756448 containerd[1733]: time="2025-06-20T19:14:21.756401298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:21.757109 containerd[1733]: time="2025-06-20T19:14:21.756986463Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.100441398s"
Jun 20 19:14:21.757109 containerd[1733]: time="2025-06-20T19:14:21.757015457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jun 20 19:14:21.757734 containerd[1733]: time="2025-06-20T19:14:21.757706914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jun 20 19:14:22.770799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260709712.mount: Deactivated successfully.
Jun 20 19:14:23.128003 containerd[1733]: time="2025-06-20T19:14:23.127880264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:23.130347 containerd[1733]: time="2025-06-20T19:14:23.130309261Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jun 20 19:14:23.133182 containerd[1733]: time="2025-06-20T19:14:23.133126954Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:23.136398 containerd[1733]: time="2025-06-20T19:14:23.136340181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:23.136719 containerd[1733]: time="2025-06-20T19:14:23.136657417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.378906628s"
Jun 20 19:14:23.136719 containerd[1733]: time="2025-06-20T19:14:23.136691746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jun 20 19:14:23.137210 containerd[1733]: time="2025-06-20T19:14:23.137187547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:14:23.810079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657709381.mount: Deactivated successfully.
Jun 20 19:14:24.709193 containerd[1733]: time="2025-06-20T19:14:24.709141695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:24.711695 containerd[1733]: time="2025-06-20T19:14:24.711655010Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jun 20 19:14:24.714642 containerd[1733]: time="2025-06-20T19:14:24.714617041Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:24.718603 containerd[1733]: time="2025-06-20T19:14:24.718553904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:24.719728 containerd[1733]: time="2025-06-20T19:14:24.719252381Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.581962788s"
Jun 20 19:14:24.719728 containerd[1733]: time="2025-06-20T19:14:24.719284830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:14:24.720003 containerd[1733]: time="2025-06-20T19:14:24.719970607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:14:25.330934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606316013.mount: Deactivated successfully.
Jun 20 19:14:25.350416 containerd[1733]: time="2025-06-20T19:14:25.350328470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:14:25.352751 containerd[1733]: time="2025-06-20T19:14:25.352717644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jun 20 19:14:25.355720 containerd[1733]: time="2025-06-20T19:14:25.355667252Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:14:25.360437 containerd[1733]: time="2025-06-20T19:14:25.360390814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:14:25.361084 containerd[1733]: time="2025-06-20T19:14:25.360797210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 640.787892ms"
Jun 20 19:14:25.361084 containerd[1733]: time="2025-06-20T19:14:25.360827784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:14:25.361435 containerd[1733]: time="2025-06-20T19:14:25.361405724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jun 20 19:14:26.042897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296753680.mount: Deactivated successfully.
Jun 20 19:14:27.700848 containerd[1733]: time="2025-06-20T19:14:27.700797257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:27.703405 containerd[1733]: time="2025-06-20T19:14:27.703374089Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jun 20 19:14:27.706466 containerd[1733]: time="2025-06-20T19:14:27.706426408Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:27.710659 containerd[1733]: time="2025-06-20T19:14:27.710610662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:14:27.711430 containerd[1733]: time="2025-06-20T19:14:27.711256268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.349801885s"
Jun 20 19:14:27.711430 containerd[1733]: time="2025-06-20T19:14:27.711289877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jun 20 19:14:29.449257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jun 20 19:14:29.452572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:29.903517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:29.912595 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:14:29.960080 kubelet[2675]: E0620 19:14:29.960038 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:14:29.963247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:14:29.963408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:14:29.964056 systemd[1]: kubelet.service: Consumed 160ms CPU time, 110.4M memory peak.
Jun 20 19:14:29.974419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:29.974737 systemd[1]: kubelet.service: Consumed 160ms CPU time, 110.4M memory peak.
Jun 20 19:14:29.977003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:29.998999 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-9.scope)...
Jun 20 19:14:29.999013 systemd[1]: Reloading...
Jun 20 19:14:30.088427 zram_generator::config[2734]: No configuration found.
Jun 20 19:14:30.183590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:14:30.291908 systemd[1]: Reloading finished in 292 ms. Jun 20 19:14:30.412205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:14:30.412308 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:14:30.412644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:30.412704 systemd[1]: kubelet.service: Consumed 76ms CPU time, 74.3M memory peak. Jun 20 19:14:30.414901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:30.942711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:30.950645 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:14:30.985928 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:30.985928 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:14:30.985928 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:14:30.986301 kubelet[2801]: I0620 19:14:30.985983 2801 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:14:31.331624 kubelet[2801]: I0620 19:14:31.331585 2801 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:14:31.331624 kubelet[2801]: I0620 19:14:31.331613 2801 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:14:31.331944 kubelet[2801]: I0620 19:14:31.331933 2801 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:14:31.366309 kubelet[2801]: E0620 19:14:31.366263 2801 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:31.368031 kubelet[2801]: I0620 19:14:31.367891 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:14:31.374120 kubelet[2801]: I0620 19:14:31.374095 2801 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:14:31.376376 kubelet[2801]: I0620 19:14:31.376336 2801 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:14:31.377624 kubelet[2801]: I0620 19:14:31.377588 2801 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:14:31.377797 kubelet[2801]: I0620 19:14:31.377624 2801 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-abe1e11a87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:14:31.377932 kubelet[2801]: I0620 19:14:31.377806 2801 topology_manager.go:138] "Creating topology manager 
with none policy" Jun 20 19:14:31.377932 kubelet[2801]: I0620 19:14:31.377816 2801 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:14:31.377980 kubelet[2801]: I0620 19:14:31.377963 2801 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:31.381421 kubelet[2801]: I0620 19:14:31.381405 2801 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:14:31.381487 kubelet[2801]: I0620 19:14:31.381436 2801 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:14:31.381487 kubelet[2801]: I0620 19:14:31.381462 2801 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:14:31.381487 kubelet[2801]: I0620 19:14:31.381474 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:14:31.390124 kubelet[2801]: W0620 19:14:31.389973 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:31.390124 kubelet[2801]: E0620 19:14:31.390035 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:31.392093 kubelet[2801]: W0620 19:14:31.392052 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-abe1e11a87&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:31.392210 kubelet[2801]: E0620 19:14:31.392195 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-abe1e11a87&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:31.392342 kubelet[2801]: I0620 19:14:31.392333 2801 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:14:31.392920 kubelet[2801]: I0620 19:14:31.392909 2801 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:14:31.393016 kubelet[2801]: W0620 19:14:31.393004 2801 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:14:31.397033 kubelet[2801]: I0620 19:14:31.397006 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:14:31.397107 kubelet[2801]: I0620 19:14:31.397061 2801 server.go:1287] "Started kubelet" Jun 20 19:14:31.400401 kubelet[2801]: I0620 19:14:31.400337 2801 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:14:31.401630 kubelet[2801]: I0620 19:14:31.401147 2801 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:14:31.402128 kubelet[2801]: I0620 19:14:31.402077 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:14:31.402377 kubelet[2801]: I0620 19:14:31.402346 2801 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:14:31.403829 kubelet[2801]: I0620 19:14:31.403815 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:14:31.404084 kubelet[2801]: E0620 19:14:31.402604 2801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.34:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4344.1.0-a-abe1e11a87.184ad627bfd05efb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-abe1e11a87,UID:ci-4344.1.0-a-abe1e11a87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-abe1e11a87,},FirstTimestamp:2025-06-20 19:14:31.397031675 +0000 UTC m=+0.443093505,LastTimestamp:2025-06-20 19:14:31.397031675 +0000 UTC m=+0.443093505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-abe1e11a87,}" Jun 20 19:14:31.404877 kubelet[2801]: I0620 19:14:31.404857 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:14:31.408616 kubelet[2801]: E0620 19:14:31.408581 2801 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" Jun 20 19:14:31.408683 kubelet[2801]: I0620 19:14:31.408636 2801 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:14:31.408874 kubelet[2801]: I0620 19:14:31.408810 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:14:31.408874 kubelet[2801]: I0620 19:14:31.408856 2801 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:14:31.409204 kubelet[2801]: W0620 19:14:31.409165 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:31.409254 kubelet[2801]: E0620 19:14:31.409216 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:31.409502 kubelet[2801]: E0620 19:14:31.409444 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-abe1e11a87?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="200ms" Jun 20 19:14:31.410197 kubelet[2801]: I0620 19:14:31.410172 2801 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:14:31.410346 kubelet[2801]: I0620 19:14:31.410293 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:14:31.410803 kubelet[2801]: E0620 19:14:31.410789 2801 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:14:31.411630 kubelet[2801]: I0620 19:14:31.411530 2801 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:14:31.433664 kubelet[2801]: I0620 19:14:31.433604 2801 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:14:31.433664 kubelet[2801]: I0620 19:14:31.433617 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:14:31.433664 kubelet[2801]: I0620 19:14:31.433635 2801 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:31.436650 kubelet[2801]: I0620 19:14:31.436603 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:14:31.438716 kubelet[2801]: I0620 19:14:31.438420 2801 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:14:31.438716 kubelet[2801]: I0620 19:14:31.438446 2801 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:14:31.438716 kubelet[2801]: I0620 19:14:31.438467 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:14:31.438716 kubelet[2801]: I0620 19:14:31.438481 2801 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:14:31.438716 kubelet[2801]: E0620 19:14:31.438522 2801 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:14:31.439194 kubelet[2801]: I0620 19:14:31.439183 2801 policy_none.go:49] "None policy: Start" Jun 20 19:14:31.439322 kubelet[2801]: I0620 19:14:31.439314 2801 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:14:31.439434 kubelet[2801]: I0620 19:14:31.439427 2801 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:14:31.440011 kubelet[2801]: W0620 19:14:31.439992 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:31.440460 kubelet[2801]: E0620 19:14:31.440423 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:31.449572 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:14:31.463667 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 19:14:31.466310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:14:31.475068 kubelet[2801]: I0620 19:14:31.474997 2801 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:14:31.475185 kubelet[2801]: I0620 19:14:31.475175 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:14:31.475309 kubelet[2801]: I0620 19:14:31.475191 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:14:31.476727 kubelet[2801]: I0620 19:14:31.475672 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:14:31.477453 kubelet[2801]: E0620 19:14:31.477429 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:14:31.477523 kubelet[2801]: E0620 19:14:31.477475 2801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-abe1e11a87\" not found" Jun 20 19:14:31.552026 systemd[1]: Created slice kubepods-burstable-pod7a20f50d3f8ab0c218966d7edab61303.slice - libcontainer container kubepods-burstable-pod7a20f50d3f8ab0c218966d7edab61303.slice. Jun 20 19:14:31.567609 kubelet[2801]: E0620 19:14:31.567576 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.569767 systemd[1]: Created slice kubepods-burstable-podb8d0190a008248379216776b95e85015.slice - libcontainer container kubepods-burstable-podb8d0190a008248379216776b95e85015.slice. 
Jun 20 19:14:31.577247 kubelet[2801]: I0620 19:14:31.577228 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.577674 kubelet[2801]: E0620 19:14:31.577650 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.578513 kubelet[2801]: E0620 19:14:31.578490 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.581125 systemd[1]: Created slice kubepods-burstable-podb2e8154ee087dbb1e780993c2f12c24e.slice - libcontainer container kubepods-burstable-podb2e8154ee087dbb1e780993c2f12c24e.slice. Jun 20 19:14:31.582760 kubelet[2801]: E0620 19:14:31.582682 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610232 kubelet[2801]: I0620 19:14:31.610001 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610232 kubelet[2801]: I0620 19:14:31.610037 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2e8154ee087dbb1e780993c2f12c24e-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-abe1e11a87\" (UID: \"b2e8154ee087dbb1e780993c2f12c24e\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87" Jun 20 
19:14:31.610232 kubelet[2801]: I0620 19:14:31.610059 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610232 kubelet[2801]: I0620 19:14:31.610076 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610232 kubelet[2801]: I0620 19:14:31.610098 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610450 kubelet[2801]: I0620 19:14:31.610115 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610450 kubelet[2801]: I0620 19:14:31.610132 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-k8s-certs\") pod 
\"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610450 kubelet[2801]: E0620 19:14:31.610126 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-abe1e11a87?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="400ms" Jun 20 19:14:31.610450 kubelet[2801]: I0620 19:14:31.610149 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.610450 kubelet[2801]: I0620 19:14:31.610177 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.780320 kubelet[2801]: I0620 19:14:31.780290 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.780690 kubelet[2801]: E0620 19:14:31.780670 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:31.869042 containerd[1733]: time="2025-06-20T19:14:31.868909018Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-abe1e11a87,Uid:7a20f50d3f8ab0c218966d7edab61303,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:31.879611 containerd[1733]: time="2025-06-20T19:14:31.879569973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-abe1e11a87,Uid:b8d0190a008248379216776b95e85015,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:31.883312 containerd[1733]: time="2025-06-20T19:14:31.883280265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-abe1e11a87,Uid:b2e8154ee087dbb1e780993c2f12c24e,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:32.010991 kubelet[2801]: E0620 19:14:32.010940 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-abe1e11a87?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="800ms" Jun 20 19:14:32.183274 kubelet[2801]: I0620 19:14:32.183180 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:32.183629 kubelet[2801]: E0620 19:14:32.183586 2801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4344.1.0-a-abe1e11a87" Jun 20 19:14:32.265869 kubelet[2801]: W0620 19:14:32.265628 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:32.265869 kubelet[2801]: E0620 19:14:32.265716 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:32.487333 kubelet[2801]: W0620 19:14:32.487262 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-abe1e11a87&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused Jun 20 19:14:32.487333 kubelet[2801]: E0620 19:14:32.487340 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-abe1e11a87&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:14:32.514013 containerd[1733]: time="2025-06-20T19:14:32.513919892Z" level=info msg="connecting to shim 06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d" address="unix:///run/containerd/s/d1cdf97999b351cde723a948f31ab0bdeb8fd59c2d13d312691a57794987eaf2" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:32.541689 containerd[1733]: time="2025-06-20T19:14:32.541093880Z" level=info msg="connecting to shim 9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8" address="unix:///run/containerd/s/eb436592a3adf5d771adeb71e6e14628b1d322be51a9e2a42b526080f9c9ae21" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:32.547380 containerd[1733]: time="2025-06-20T19:14:32.547319067Z" level=info msg="connecting to shim 6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145" address="unix:///run/containerd/s/d4c8f84dcdca78e81407a3512b4f1f3712927405bc08ea4ec2fb9b0dbc923d62" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:32.560753 systemd[1]: Started 
cri-containerd-06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d.scope - libcontainer container 06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d. Jun 20 19:14:32.593512 systemd[1]: Started cri-containerd-6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145.scope - libcontainer container 6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145. Jun 20 19:14:32.595438 systemd[1]: Started cri-containerd-9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8.scope - libcontainer container 9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8. Jun 20 19:14:32.632400 containerd[1733]: time="2025-06-20T19:14:32.632116471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-abe1e11a87,Uid:b2e8154ee087dbb1e780993c2f12c24e,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d\"" Jun 20 19:14:32.638641 containerd[1733]: time="2025-06-20T19:14:32.638595878Z" level=info msg="CreateContainer within sandbox \"06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:14:32.664416 containerd[1733]: time="2025-06-20T19:14:32.664377102Z" level=info msg="Container 0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:32.682712 containerd[1733]: time="2025-06-20T19:14:32.682674132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-abe1e11a87,Uid:7a20f50d3f8ab0c218966d7edab61303,Namespace:kube-system,Attempt:0,} returns sandbox id \"9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8\"" Jun 20 19:14:32.684935 containerd[1733]: time="2025-06-20T19:14:32.684834039Z" level=info msg="CreateContainer within sandbox \"9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:14:32.686378 containerd[1733]: time="2025-06-20T19:14:32.686336622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-abe1e11a87,Uid:b8d0190a008248379216776b95e85015,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145\"" Jun 20 19:14:32.687375 containerd[1733]: time="2025-06-20T19:14:32.687172713Z" level=info msg="CreateContainer within sandbox \"06d8783b9099e4ce7bce5728f67f6e745525baf29c6138f9d9b34d0214b1235d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229\"" Jun 20 19:14:32.692428 containerd[1733]: time="2025-06-20T19:14:32.687703153Z" level=info msg="StartContainer for \"0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229\"" Jun 20 19:14:32.692838 containerd[1733]: time="2025-06-20T19:14:32.692816371Z" level=info msg="connecting to shim 0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229" address="unix:///run/containerd/s/d1cdf97999b351cde723a948f31ab0bdeb8fd59c2d13d312691a57794987eaf2" protocol=ttrpc version=3 Jun 20 19:14:32.693333 containerd[1733]: time="2025-06-20T19:14:32.693308188Z" level=info msg="CreateContainer within sandbox \"6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:14:32.711527 systemd[1]: Started cri-containerd-0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229.scope - libcontainer container 0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229. 
Jun 20 19:14:32.715209 containerd[1733]: time="2025-06-20T19:14:32.715172698Z" level=info msg="Container 71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:14:32.721492 containerd[1733]: time="2025-06-20T19:14:32.721470076Z" level=info msg="Container 6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:14:32.734060 containerd[1733]: time="2025-06-20T19:14:32.734036360Z" level=info msg="CreateContainer within sandbox \"6a7c03c0a937b3ff21625ecc96fc02a10ee80d6d12b84c2439c037d2753b7145\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada\""
Jun 20 19:14:32.734849 containerd[1733]: time="2025-06-20T19:14:32.734800444Z" level=info msg="StartContainer for \"71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada\""
Jun 20 19:14:32.736153 containerd[1733]: time="2025-06-20T19:14:32.736096022Z" level=info msg="connecting to shim 71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada" address="unix:///run/containerd/s/d4c8f84dcdca78e81407a3512b4f1f3712927405bc08ea4ec2fb9b0dbc923d62" protocol=ttrpc version=3
Jun 20 19:14:32.748415 containerd[1733]: time="2025-06-20T19:14:32.747435401Z" level=info msg="CreateContainer within sandbox \"9047c935933f6c22abbde378c7ade3c32f2514a29629f7357f07a5b185708ee8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8\""
Jun 20 19:14:32.749805 containerd[1733]: time="2025-06-20T19:14:32.749666655Z" level=info msg="StartContainer for \"6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8\""
Jun 20 19:14:32.751253 containerd[1733]: time="2025-06-20T19:14:32.751192911Z" level=info msg="connecting to shim 6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8" address="unix:///run/containerd/s/eb436592a3adf5d771adeb71e6e14628b1d322be51a9e2a42b526080f9c9ae21" protocol=ttrpc version=3
Jun 20 19:14:32.760585 systemd[1]: Started cri-containerd-71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada.scope - libcontainer container 71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada.
Jun 20 19:14:32.774390 containerd[1733]: time="2025-06-20T19:14:32.774345476Z" level=info msg="StartContainer for \"0ba4c90d22054e34dde3f80b47da1d587ec0904cfd62bed72c684a3be054e229\" returns successfully"
Jun 20 19:14:32.785418 systemd[1]: Started cri-containerd-6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8.scope - libcontainer container 6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8.
Jun 20 19:14:32.789609 kubelet[2801]: W0620 19:14:32.789553 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused
Jun 20 19:14:32.789691 kubelet[2801]: E0620 19:14:32.789626 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:14:32.800337 kubelet[2801]: W0620 19:14:32.800282 2801 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.34:6443: connect: connection refused
Jun 20 19:14:32.800459 kubelet[2801]: E0620 19:14:32.800351 2801 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:14:32.811995 kubelet[2801]: E0620 19:14:32.811962 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-abe1e11a87?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="1.6s"
Jun 20 19:14:32.859890 containerd[1733]: time="2025-06-20T19:14:32.859616242Z" level=info msg="StartContainer for \"71a24c3b9eee02f9c9261019116f09b0e8f91e948cdc805e361dcd82bed34ada\" returns successfully"
Jun 20 19:14:32.879853 containerd[1733]: time="2025-06-20T19:14:32.879790873Z" level=info msg="StartContainer for \"6bc3c4a10079e789deecb6e8287e1d2d420d11e2663d67697ba8fb4f4af581d8\" returns successfully"
Jun 20 19:14:32.987011 kubelet[2801]: I0620 19:14:32.986985 2801 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:33.452488 kubelet[2801]: E0620 19:14:33.451083 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:33.456699 kubelet[2801]: E0620 19:14:33.455061 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:33.458519 kubelet[2801]: E0620 19:14:33.458474 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.461688 kubelet[2801]: E0620 19:14:34.461507 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.462697 kubelet[2801]: E0620 19:14:34.462669 2801 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.529341 kubelet[2801]: E0620 19:14:34.529292 2801 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-abe1e11a87\" not found" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.653712 kubelet[2801]: I0620 19:14:34.653672 2801 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.653712 kubelet[2801]: E0620 19:14:34.653721 2801 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-abe1e11a87\": node \"ci-4344.1.0-a-abe1e11a87\" not found"
Jun 20 19:14:34.709925 kubelet[2801]: I0620 19:14:34.709869 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.717178 kubelet[2801]: E0620 19:14:34.716602 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.717178 kubelet[2801]: I0620 19:14:34.716636 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.718748 kubelet[2801]: E0620 19:14:34.718713 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.718748 kubelet[2801]: I0620 19:14:34.718743 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:34.720325 kubelet[2801]: E0620 19:14:34.720283 2801 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-abe1e11a87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:35.386438 kubelet[2801]: I0620 19:14:35.386395 2801 apiserver.go:52] "Watching apiserver"
Jun 20 19:14:35.409244 kubelet[2801]: I0620 19:14:35.409191 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:14:35.460401 kubelet[2801]: I0620 19:14:35.460341 2801 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:35.474761 kubelet[2801]: W0620 19:14:35.474711 2801 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:14:36.896191 systemd[1]: Reload requested from client PID 3071 ('systemctl') (unit session-9.scope)...
Jun 20 19:14:36.896207 systemd[1]: Reloading...
Jun 20 19:14:36.982458 zram_generator::config[3117]: No configuration found.
Jun 20 19:14:37.070416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:14:37.187541 systemd[1]: Reloading finished in 291 ms.
Jun 20 19:14:37.212090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:37.223672 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:14:37.223849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:37.223894 systemd[1]: kubelet.service: Consumed 787ms CPU time, 129.6M memory peak.
Jun 20 19:14:37.226089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:14:38.225189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:14:38.234808 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:14:38.276944 kubelet[3184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:14:38.276944 kubelet[3184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:14:38.276944 kubelet[3184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:14:38.278594 kubelet[3184]: I0620 19:14:38.277253 3184 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:14:38.286735 kubelet[3184]: I0620 19:14:38.286703 3184 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 20 19:14:38.286735 kubelet[3184]: I0620 19:14:38.286732 3184 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:14:38.287113 kubelet[3184]: I0620 19:14:38.286978 3184 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 20 19:14:38.288411 kubelet[3184]: I0620 19:14:38.288353 3184 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 20 19:14:38.290105 kubelet[3184]: I0620 19:14:38.290076 3184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:14:38.294023 kubelet[3184]: I0620 19:14:38.294001 3184 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:14:38.297071 kubelet[3184]: I0620 19:14:38.296499 3184 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:14:38.297071 kubelet[3184]: I0620 19:14:38.296662 3184 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:14:38.297071 kubelet[3184]: I0620 19:14:38.296688 3184 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-abe1e11a87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:14:38.297071 kubelet[3184]: I0620 19:14:38.296872 3184 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:14:38.297300 kubelet[3184]: I0620 19:14:38.296881 3184 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:14:38.297300 kubelet[3184]: I0620 19:14:38.296928 3184 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:14:38.297350 kubelet[3184]: I0620 19:14:38.297336 3184 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:14:38.297422 kubelet[3184]: I0620 19:14:38.297407 3184 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:14:38.297449 kubelet[3184]: I0620 19:14:38.297432 3184 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:14:38.297449 kubelet[3184]: I0620 19:14:38.297445 3184 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:14:38.300390 kubelet[3184]: I0620 19:14:38.299544 3184 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:14:38.300390 kubelet[3184]: I0620 19:14:38.300242 3184 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:14:38.301728 kubelet[3184]: I0620 19:14:38.301703 3184 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:14:38.301840 kubelet[3184]: I0620 19:14:38.301833 3184 server.go:1287] "Started kubelet"
Jun 20 19:14:38.311101 kubelet[3184]: I0620 19:14:38.311084 3184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:14:38.317158 kubelet[3184]: I0620 19:14:38.309733 3184 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:14:38.318843 kubelet[3184]: I0620 19:14:38.318737 3184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:14:38.319072 kubelet[3184]: I0620 19:14:38.318951 3184 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:14:38.319237 kubelet[3184]: E0620 19:14:38.319141 3184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-abe1e11a87\" not found"
Jun 20 19:14:38.320334 kubelet[3184]: I0620 19:14:38.319849 3184 server.go:479] "Adding debug handlers to kubelet server"
Jun 20 19:14:38.322829 kubelet[3184]: I0620 19:14:38.322011 3184 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:14:38.322829 kubelet[3184]: I0620 19:14:38.322125 3184 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:14:38.323937 kubelet[3184]: I0620 19:14:38.323903 3184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:14:38.325381 kubelet[3184]: I0620 19:14:38.324696 3184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:14:38.325656 kubelet[3184]: I0620 19:14:38.325641 3184 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:14:38.326436 kubelet[3184]: I0620 19:14:38.324944 3184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:14:38.326543 kubelet[3184]: I0620 19:14:38.326535 3184 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 20 19:14:38.326605 kubelet[3184]: I0620 19:14:38.326598 3184 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:14:38.326643 kubelet[3184]: I0620 19:14:38.326638 3184 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 20 19:14:38.326735 kubelet[3184]: E0620 19:14:38.326720 3184 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:14:38.349835 kubelet[3184]: I0620 19:14:38.348396 3184 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:14:38.349835 kubelet[3184]: I0620 19:14:38.348500 3184 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:14:38.354687 kubelet[3184]: I0620 19:14:38.354668 3184 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:14:38.358620 kubelet[3184]: E0620 19:14:38.357430 3184 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:14:38.412382 kubelet[3184]: I0620 19:14:38.412209 3184 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:14:38.412382 kubelet[3184]: I0620 19:14:38.412228 3184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:14:38.412382 kubelet[3184]: I0620 19:14:38.412264 3184 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:14:38.412757 kubelet[3184]: I0620 19:14:38.412576 3184 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 19:14:38.412757 kubelet[3184]: I0620 19:14:38.412592 3184 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 19:14:38.412757 kubelet[3184]: I0620 19:14:38.412678 3184 policy_none.go:49] "None policy: Start"
Jun 20 19:14:38.412757 kubelet[3184]: I0620 19:14:38.412705 3184 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:14:38.412757 kubelet[3184]: I0620 19:14:38.412717 3184 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:14:38.412893 kubelet[3184]: I0620 19:14:38.412878 3184 state_mem.go:75] "Updated machine memory state"
Jun 20 19:14:38.420965 kubelet[3184]: I0620 19:14:38.420355 3184 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:14:38.422508 kubelet[3184]: I0620 19:14:38.421102 3184 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:14:38.422508 kubelet[3184]: I0620 19:14:38.421117 3184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:14:38.425137 kubelet[3184]: I0620 19:14:38.423069 3184 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:14:38.427559 kubelet[3184]: E0620 19:14:38.427511 3184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:14:38.428351 kubelet[3184]: I0620 19:14:38.428330 3184 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.428724 kubelet[3184]: I0620 19:14:38.428707 3184 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.428968 kubelet[3184]: I0620 19:14:38.428952 3184 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.429413 sudo[3216]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 20 19:14:38.429669 sudo[3216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jun 20 19:14:38.449382 kubelet[3184]: W0620 19:14:38.448673 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:14:38.455336 kubelet[3184]: W0620 19:14:38.455278 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:14:38.455735 kubelet[3184]: E0620 19:14:38.455657 3184 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-abe1e11a87\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.456150 kubelet[3184]: W0620 19:14:38.455554 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:14:38.539624 kubelet[3184]: I0620 19:14:38.539203 3184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.552622 kubelet[3184]: I0620 19:14:38.552586 3184 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.552907 kubelet[3184]: I0620 19:14:38.552809 3184 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623316 kubelet[3184]: I0620 19:14:38.623195 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623565 kubelet[3184]: I0620 19:14:38.623399 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623565 kubelet[3184]: I0620 19:14:38.623432 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623808 kubelet[3184]: I0620 19:14:38.623645 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623808 kubelet[3184]: I0620 19:14:38.623672 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2e8154ee087dbb1e780993c2f12c24e-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-abe1e11a87\" (UID: \"b2e8154ee087dbb1e780993c2f12c24e\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623808 kubelet[3184]: I0620 19:14:38.623715 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a20f50d3f8ab0c218966d7edab61303-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-abe1e11a87\" (UID: \"7a20f50d3f8ab0c218966d7edab61303\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.623808 kubelet[3184]: I0620 19:14:38.623732 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.624029 kubelet[3184]: I0620 19:14:38.623965 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.624029 kubelet[3184]: I0620 19:14:38.623989 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8d0190a008248379216776b95e85015-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-abe1e11a87\" (UID: \"b8d0190a008248379216776b95e85015\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:38.982495 sudo[3216]: pam_unix(sudo:session): session closed for user root
Jun 20 19:14:39.310251 kubelet[3184]: I0620 19:14:39.310138 3184 apiserver.go:52] "Watching apiserver"
Jun 20 19:14:39.323149 kubelet[3184]: I0620 19:14:39.323117 3184 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:14:39.386217 kubelet[3184]: I0620 19:14:39.384094 3184 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:39.396635 kubelet[3184]: W0620 19:14:39.396306 3184 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jun 20 19:14:39.397377 kubelet[3184]: E0620 19:14:39.397049 3184 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-abe1e11a87\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87"
Jun 20 19:14:39.430222 kubelet[3184]: I0620 19:14:39.430090 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-abe1e11a87" podStartSLOduration=4.430069474 podStartE2EDuration="4.430069474s" podCreationTimestamp="2025-06-20 19:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:39.419233993 +0000 UTC m=+1.180721254" watchObservedRunningTime="2025-06-20 19:14:39.430069474 +0000 UTC m=+1.191556748"
Jun 20 19:14:39.440873 kubelet[3184]: I0620 19:14:39.440482 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-abe1e11a87" podStartSLOduration=1.44046588 podStartE2EDuration="1.44046588s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:39.430990255 +0000 UTC m=+1.192477522" watchObservedRunningTime="2025-06-20 19:14:39.44046588 +0000 UTC m=+1.201953143"
Jun 20 19:14:39.462241 kubelet[3184]: I0620 19:14:39.462183 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-abe1e11a87" podStartSLOduration=1.462164335 podStartE2EDuration="1.462164335s" podCreationTimestamp="2025-06-20 19:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:39.442444698 +0000 UTC m=+1.203931969" watchObservedRunningTime="2025-06-20 19:14:39.462164335 +0000 UTC m=+1.223651607"
Jun 20 19:14:40.350718 sudo[2175]: pam_unix(sudo:session): session closed for user root
Jun 20 19:14:40.446141 sshd[2174]: Connection closed by 10.200.16.10 port 54840
Jun 20 19:14:40.446741 sshd-session[2172]: pam_unix(sshd:session): session closed for user core
Jun 20 19:14:40.450328 systemd[1]: sshd@6-10.200.4.34:22-10.200.16.10:54840.service: Deactivated successfully.
Jun 20 19:14:40.453244 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:14:40.453469 systemd[1]: session-9.scope: Consumed 3.747s CPU time, 269.4M memory peak.
Jun 20 19:14:40.454773 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:14:40.456061 systemd-logind[1704]: Removed session 9.
Jun 20 19:14:42.993148 kubelet[3184]: I0620 19:14:42.993117 3184 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:14:42.993754 containerd[1733]: time="2025-06-20T19:14:42.993703413Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:14:42.994118 kubelet[3184]: I0620 19:14:42.993997 3184 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:14:44.048728 systemd[1]: Created slice kubepods-besteffort-podd0893e93_8404_4561_9e4f_c0be422a4f64.slice - libcontainer container kubepods-besteffort-podd0893e93_8404_4561_9e4f_c0be422a4f64.slice.
Jun 20 19:14:44.059141 kubelet[3184]: I0620 19:14:44.058011 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0893e93-8404-4561-9e4f-c0be422a4f64-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f7q8z\" (UID: \"d0893e93-8404-4561-9e4f-c0be422a4f64\") " pod="kube-system/cilium-operator-6c4d7847fc-f7q8z"
Jun 20 19:14:44.059482 kubelet[3184]: I0620 19:14:44.059204 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvpl\" (UniqueName: \"kubernetes.io/projected/d0893e93-8404-4561-9e4f-c0be422a4f64-kube-api-access-fjvpl\") pod \"cilium-operator-6c4d7847fc-f7q8z\" (UID: \"d0893e93-8404-4561-9e4f-c0be422a4f64\") " pod="kube-system/cilium-operator-6c4d7847fc-f7q8z"
Jun 20 19:14:44.092226 systemd[1]: Created slice kubepods-besteffort-podef3518eb_12e7_409a_aa23_7daad629fb09.slice - libcontainer container kubepods-besteffort-podef3518eb_12e7_409a_aa23_7daad629fb09.slice.
Jun 20 19:14:44.106630 systemd[1]: Created slice kubepods-burstable-pod719c4aab_1100_4733_a991_f584034c069d.slice - libcontainer container kubepods-burstable-pod719c4aab_1100_4733_a991_f584034c069d.slice.
Jun 20 19:14:44.159406 kubelet[3184]: I0620 19:14:44.159367 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef3518eb-12e7-409a-aa23-7daad629fb09-lib-modules\") pod \"kube-proxy-f6kdq\" (UID: \"ef3518eb-12e7-409a-aa23-7daad629fb09\") " pod="kube-system/kube-proxy-f6kdq" Jun 20 19:14:44.159406 kubelet[3184]: I0620 19:14:44.159410 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxp6\" (UniqueName: \"kubernetes.io/projected/ef3518eb-12e7-409a-aa23-7daad629fb09-kube-api-access-kfxp6\") pod \"kube-proxy-f6kdq\" (UID: \"ef3518eb-12e7-409a-aa23-7daad629fb09\") " pod="kube-system/kube-proxy-f6kdq" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159434 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-net\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159468 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-kernel\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159484 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-run\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159499 3184 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-hostproc\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159514 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-etc-cni-netd\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159586 kubelet[3184]: I0620 19:14:44.159529 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-lib-modules\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159542 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-bpf-maps\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159557 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c4aab-1100-4733-a991-f584034c069d-clustermesh-secrets\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159590 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ef3518eb-12e7-409a-aa23-7daad629fb09-xtables-lock\") pod \"kube-proxy-f6kdq\" (UID: \"ef3518eb-12e7-409a-aa23-7daad629fb09\") " pod="kube-system/kube-proxy-f6kdq" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159606 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cni-path\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159623 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef3518eb-12e7-409a-aa23-7daad629fb09-kube-proxy\") pod \"kube-proxy-f6kdq\" (UID: \"ef3518eb-12e7-409a-aa23-7daad629fb09\") " pod="kube-system/kube-proxy-f6kdq" Jun 20 19:14:44.159732 kubelet[3184]: I0620 19:14:44.159639 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9b9\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-kube-api-access-bx9b9\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159859 kubelet[3184]: I0620 19:14:44.159669 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-cgroup\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159859 kubelet[3184]: I0620 19:14:44.159685 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-xtables-lock\") pod \"cilium-xxrs9\" (UID: 
\"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159859 kubelet[3184]: I0620 19:14:44.159705 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c4aab-1100-4733-a991-f584034c069d-cilium-config-path\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.159859 kubelet[3184]: I0620 19:14:44.159721 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-hubble-tls\") pod \"cilium-xxrs9\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") " pod="kube-system/cilium-xxrs9" Jun 20 19:14:44.358443 containerd[1733]: time="2025-06-20T19:14:44.358318236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f7q8z,Uid:d0893e93-8404-4561-9e4f-c0be422a4f64,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:44.399796 containerd[1733]: time="2025-06-20T19:14:44.399265368Z" level=info msg="connecting to shim 15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84" address="unix:///run/containerd/s/fcf2fc3f782a7c14b047b1063e433c68781c49cba8dec707d5852859045564ad" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:44.401546 containerd[1733]: time="2025-06-20T19:14:44.401510063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6kdq,Uid:ef3518eb-12e7-409a-aa23-7daad629fb09,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:44.411480 containerd[1733]: time="2025-06-20T19:14:44.411435684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xxrs9,Uid:719c4aab-1100-4733-a991-f584034c069d,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:44.419658 systemd[1]: Started cri-containerd-15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84.scope - libcontainer 
container 15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84. Jun 20 19:14:44.459349 containerd[1733]: time="2025-06-20T19:14:44.459086877Z" level=info msg="connecting to shim 9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de" address="unix:///run/containerd/s/3509aee3b43c2c25d498b7cb238af690f68f4b2da37e6faac1f40fa3c958c4fd" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:44.466599 containerd[1733]: time="2025-06-20T19:14:44.466567618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f7q8z,Uid:d0893e93-8404-4561-9e4f-c0be422a4f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\"" Jun 20 19:14:44.469856 containerd[1733]: time="2025-06-20T19:14:44.469784347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:14:44.473708 containerd[1733]: time="2025-06-20T19:14:44.473667815Z" level=info msg="connecting to shim d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:44.490600 systemd[1]: Started cri-containerd-9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de.scope - libcontainer container 9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de. Jun 20 19:14:44.495784 systemd[1]: Started cri-containerd-d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde.scope - libcontainer container d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde. 
Jun 20 19:14:44.525885 containerd[1733]: time="2025-06-20T19:14:44.525844134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xxrs9,Uid:719c4aab-1100-4733-a991-f584034c069d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\"" Jun 20 19:14:44.529638 containerd[1733]: time="2025-06-20T19:14:44.529582141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6kdq,Uid:ef3518eb-12e7-409a-aa23-7daad629fb09,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de\"" Jun 20 19:14:44.532427 containerd[1733]: time="2025-06-20T19:14:44.532401818Z" level=info msg="CreateContainer within sandbox \"9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:14:44.550476 containerd[1733]: time="2025-06-20T19:14:44.550446991Z" level=info msg="Container 2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:44.565750 containerd[1733]: time="2025-06-20T19:14:44.565717268Z" level=info msg="CreateContainer within sandbox \"9d5d6a02ed110e0e7a87b2e02239825d6a2ae2e6d4d9fc1959df44af5754b6de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485\"" Jun 20 19:14:44.566304 containerd[1733]: time="2025-06-20T19:14:44.566281675Z" level=info msg="StartContainer for \"2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485\"" Jun 20 19:14:44.571881 containerd[1733]: time="2025-06-20T19:14:44.571606367Z" level=info msg="connecting to shim 2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485" address="unix:///run/containerd/s/3509aee3b43c2c25d498b7cb238af690f68f4b2da37e6faac1f40fa3c958c4fd" protocol=ttrpc version=3 Jun 20 19:14:44.600552 systemd[1]: Started 
cri-containerd-2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485.scope - libcontainer container 2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485. Jun 20 19:14:44.631943 containerd[1733]: time="2025-06-20T19:14:44.631416986Z" level=info msg="StartContainer for \"2fd3393d4fae0b797fe9f5341e55151b9ff8ea831321d201610ce4dee2419485\" returns successfully" Jun 20 19:14:45.424292 kubelet[3184]: I0620 19:14:45.424226 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6kdq" podStartSLOduration=1.424206901 podStartE2EDuration="1.424206901s" podCreationTimestamp="2025-06-20 19:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:45.411808269 +0000 UTC m=+7.173295539" watchObservedRunningTime="2025-06-20 19:14:45.424206901 +0000 UTC m=+7.185694248" Jun 20 19:14:46.363916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256178025.mount: Deactivated successfully. 
Jun 20 19:14:50.336225 containerd[1733]: time="2025-06-20T19:14:50.336177895Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:50.338684 containerd[1733]: time="2025-06-20T19:14:50.338646503Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:14:50.342786 containerd[1733]: time="2025-06-20T19:14:50.342741959Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:50.343640 containerd[1733]: time="2025-06-20T19:14:50.343532253Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.873714386s" Jun 20 19:14:50.343640 containerd[1733]: time="2025-06-20T19:14:50.343566162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:14:50.344661 containerd[1733]: time="2025-06-20T19:14:50.344557098Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:14:50.345742 containerd[1733]: time="2025-06-20T19:14:50.345703902Z" level=info msg="CreateContainer within sandbox 
\"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:14:50.368203 containerd[1733]: time="2025-06-20T19:14:50.367478413Z" level=info msg="Container 1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:50.379897 containerd[1733]: time="2025-06-20T19:14:50.379866200Z" level=info msg="CreateContainer within sandbox \"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\"" Jun 20 19:14:50.381035 containerd[1733]: time="2025-06-20T19:14:50.380351740Z" level=info msg="StartContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\"" Jun 20 19:14:50.381696 containerd[1733]: time="2025-06-20T19:14:50.381423838Z" level=info msg="connecting to shim 1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2" address="unix:///run/containerd/s/fcf2fc3f782a7c14b047b1063e433c68781c49cba8dec707d5852859045564ad" protocol=ttrpc version=3 Jun 20 19:14:50.402528 systemd[1]: Started cri-containerd-1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2.scope - libcontainer container 1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2. Jun 20 19:14:50.432447 containerd[1733]: time="2025-06-20T19:14:50.432352976Z" level=info msg="StartContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" returns successfully" Jun 20 19:15:01.332084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522271709.mount: Deactivated successfully. 
Jun 20 19:15:02.999503 containerd[1733]: time="2025-06-20T19:15:02.999451027Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:03.003122 containerd[1733]: time="2025-06-20T19:15:03.003080734Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:15:03.006727 containerd[1733]: time="2025-06-20T19:15:03.006686404Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:03.007824 containerd[1733]: time="2025-06-20T19:15:03.007678312Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.663089959s" Jun 20 19:15:03.007824 containerd[1733]: time="2025-06-20T19:15:03.007714800Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:15:03.010064 containerd[1733]: time="2025-06-20T19:15:03.010029449Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:15:03.030991 containerd[1733]: time="2025-06-20T19:15:03.030876956Z" level=info msg="Container f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533: CDI 
devices from CRI Config.CDIDevices: []" Jun 20 19:15:03.045048 containerd[1733]: time="2025-06-20T19:15:03.045010345Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\"" Jun 20 19:15:03.045501 containerd[1733]: time="2025-06-20T19:15:03.045476874Z" level=info msg="StartContainer for \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\"" Jun 20 19:15:03.046536 containerd[1733]: time="2025-06-20T19:15:03.046335414Z" level=info msg="connecting to shim f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" protocol=ttrpc version=3 Jun 20 19:15:03.068515 systemd[1]: Started cri-containerd-f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533.scope - libcontainer container f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533. Jun 20 19:15:03.100192 containerd[1733]: time="2025-06-20T19:15:03.100052910Z" level=info msg="StartContainer for \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" returns successfully" Jun 20 19:15:03.107146 systemd[1]: cri-containerd-f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533.scope: Deactivated successfully. 
Jun 20 19:15:03.108138 containerd[1733]: time="2025-06-20T19:15:03.108068941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" id:\"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" pid:3644 exited_at:{seconds:1750446903 nanos:107495841}" Jun 20 19:15:03.108268 containerd[1733]: time="2025-06-20T19:15:03.108218961Z" level=info msg="received exit event container_id:\"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" id:\"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" pid:3644 exited_at:{seconds:1750446903 nanos:107495841}" Jun 20 19:15:03.123906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533-rootfs.mount: Deactivated successfully. Jun 20 19:15:03.513325 kubelet[3184]: I0620 19:15:03.455548 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f7q8z" podStartSLOduration=13.579775929 podStartE2EDuration="19.455530242s" podCreationTimestamp="2025-06-20 19:14:44 +0000 UTC" firstStartedPulling="2025-06-20 19:14:44.468699429 +0000 UTC m=+6.230186694" lastFinishedPulling="2025-06-20 19:14:50.344453737 +0000 UTC m=+12.105941007" observedRunningTime="2025-06-20 19:14:51.42806724 +0000 UTC m=+13.189554510" watchObservedRunningTime="2025-06-20 19:15:03.455530242 +0000 UTC m=+25.217017514" Jun 20 19:15:07.449873 containerd[1733]: time="2025-06-20T19:15:07.449413413Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:15:07.474479 containerd[1733]: time="2025-06-20T19:15:07.473451086Z" level=info msg="Container f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:07.485667 
containerd[1733]: time="2025-06-20T19:15:07.485633807Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\"" Jun 20 19:15:07.486151 containerd[1733]: time="2025-06-20T19:15:07.486131583Z" level=info msg="StartContainer for \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\"" Jun 20 19:15:07.487331 containerd[1733]: time="2025-06-20T19:15:07.487293932Z" level=info msg="connecting to shim f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" protocol=ttrpc version=3 Jun 20 19:15:07.510518 systemd[1]: Started cri-containerd-f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295.scope - libcontainer container f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295. Jun 20 19:15:07.537680 containerd[1733]: time="2025-06-20T19:15:07.537560839Z" level=info msg="StartContainer for \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" returns successfully" Jun 20 19:15:07.547969 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:15:07.548510 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:15:07.548798 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:15:07.551645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:15:07.553920 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:15:07.555236 systemd[1]: cri-containerd-f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295.scope: Deactivated successfully. 
Jun 20 19:15:07.556675 containerd[1733]: time="2025-06-20T19:15:07.556638190Z" level=info msg="received exit event container_id:\"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" id:\"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" pid:3695 exited_at:{seconds:1750446907 nanos:553716547}" Jun 20 19:15:07.556928 containerd[1733]: time="2025-06-20T19:15:07.556895275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" id:\"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" pid:3695 exited_at:{seconds:1750446907 nanos:553716547}" Jun 20 19:15:07.575835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:15:07.580161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295-rootfs.mount: Deactivated successfully. Jun 20 19:15:08.452713 containerd[1733]: time="2025-06-20T19:15:08.452664586Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:15:08.480551 containerd[1733]: time="2025-06-20T19:15:08.477952688Z" level=info msg="Container 85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:08.497068 containerd[1733]: time="2025-06-20T19:15:08.497027772Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\"" Jun 20 19:15:08.498790 containerd[1733]: time="2025-06-20T19:15:08.497519992Z" level=info msg="StartContainer for \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\"" Jun 20 19:15:08.498890 containerd[1733]: 
time="2025-06-20T19:15:08.498843082Z" level=info msg="connecting to shim 85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" protocol=ttrpc version=3 Jun 20 19:15:08.524531 systemd[1]: Started cri-containerd-85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5.scope - libcontainer container 85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5. Jun 20 19:15:08.551268 systemd[1]: cri-containerd-85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5.scope: Deactivated successfully. Jun 20 19:15:08.554934 containerd[1733]: time="2025-06-20T19:15:08.554528408Z" level=info msg="received exit event container_id:\"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" id:\"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" pid:3742 exited_at:{seconds:1750446908 nanos:553718117}" Jun 20 19:15:08.555288 containerd[1733]: time="2025-06-20T19:15:08.555196629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" id:\"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" pid:3742 exited_at:{seconds:1750446908 nanos:553718117}" Jun 20 19:15:08.558378 containerd[1733]: time="2025-06-20T19:15:08.558236631Z" level=info msg="StartContainer for \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" returns successfully" Jun 20 19:15:08.577297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5-rootfs.mount: Deactivated successfully. 
Jun 20 19:15:09.457596 containerd[1733]: time="2025-06-20T19:15:09.457421246Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:15:09.620137 containerd[1733]: time="2025-06-20T19:15:09.620089007Z" level=info msg="Container 1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:09.712523 containerd[1733]: time="2025-06-20T19:15:09.712413846Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\"" Jun 20 19:15:09.713173 containerd[1733]: time="2025-06-20T19:15:09.713127598Z" level=info msg="StartContainer for \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\"" Jun 20 19:15:09.714093 containerd[1733]: time="2025-06-20T19:15:09.714058597Z" level=info msg="connecting to shim 1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" protocol=ttrpc version=3 Jun 20 19:15:09.738529 systemd[1]: Started cri-containerd-1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11.scope - libcontainer container 1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11. Jun 20 19:15:09.761195 systemd[1]: cri-containerd-1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11.scope: Deactivated successfully. 
Jun 20 19:15:09.762305 containerd[1733]: time="2025-06-20T19:15:09.762118024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" id:\"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" pid:3784 exited_at:{seconds:1750446909 nanos:761587646}" Jun 20 19:15:09.766132 containerd[1733]: time="2025-06-20T19:15:09.766029260Z" level=info msg="received exit event container_id:\"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" id:\"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" pid:3784 exited_at:{seconds:1750446909 nanos:761587646}" Jun 20 19:15:09.772154 containerd[1733]: time="2025-06-20T19:15:09.772125868Z" level=info msg="StartContainer for \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" returns successfully" Jun 20 19:15:09.783040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11-rootfs.mount: Deactivated successfully. 
Jun 20 19:15:11.467734 containerd[1733]: time="2025-06-20T19:15:11.467685316Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:15:11.566186 containerd[1733]: time="2025-06-20T19:15:11.566135327Z" level=info msg="Container 29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:11.702037 containerd[1733]: time="2025-06-20T19:15:11.701989528Z" level=info msg="CreateContainer within sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\"" Jun 20 19:15:11.702680 containerd[1733]: time="2025-06-20T19:15:11.702617911Z" level=info msg="StartContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\"" Jun 20 19:15:11.703831 containerd[1733]: time="2025-06-20T19:15:11.703759876Z" level=info msg="connecting to shim 29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76" address="unix:///run/containerd/s/dcc8330f1652ef82cef9468d567232b6f71f86f3fcb1da79acff429034995ce6" protocol=ttrpc version=3 Jun 20 19:15:11.728534 systemd[1]: Started cri-containerd-29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76.scope - libcontainer container 29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76. 
Jun 20 19:15:11.758856 containerd[1733]: time="2025-06-20T19:15:11.758814134Z" level=info msg="StartContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" returns successfully" Jun 20 19:15:11.822953 containerd[1733]: time="2025-06-20T19:15:11.822903974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" id:\"2146ae0535a8d091af48ec95d972995a0b7fd60349cf1e931ac2a39cea846fbe\" pid:3854 exited_at:{seconds:1750446911 nanos:822520084}" Jun 20 19:15:11.923133 kubelet[3184]: I0620 19:15:11.923095 3184 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:15:11.962073 kubelet[3184]: I0620 19:15:11.962028 3184 status_manager.go:890] "Failed to get status for pod" podUID="a57b5ee9-2403-4579-9d1a-3b09291419fc" pod="kube-system/coredns-668d6bf9bc-kxgvl" err="pods \"coredns-668d6bf9bc-kxgvl\" is forbidden: User \"system:node:ci-4344.1.0-a-abe1e11a87\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.0-a-abe1e11a87' and this object" Jun 20 19:15:11.969769 systemd[1]: Created slice kubepods-burstable-poda57b5ee9_2403_4579_9d1a_3b09291419fc.slice - libcontainer container kubepods-burstable-poda57b5ee9_2403_4579_9d1a_3b09291419fc.slice. Jun 20 19:15:11.978826 systemd[1]: Created slice kubepods-burstable-pod5889ab45_61a9_4ede_8f8a_370eeb6cef14.slice - libcontainer container kubepods-burstable-pod5889ab45_61a9_4ede_8f8a_370eeb6cef14.slice. 
Jun 20 19:15:12.038232 kubelet[3184]: I0620 19:15:12.038189 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5889ab45-61a9-4ede-8f8a-370eeb6cef14-config-volume\") pod \"coredns-668d6bf9bc-7kk5q\" (UID: \"5889ab45-61a9-4ede-8f8a-370eeb6cef14\") " pod="kube-system/coredns-668d6bf9bc-7kk5q"
Jun 20 19:15:12.038232 kubelet[3184]: I0620 19:15:12.038228 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtr56\" (UniqueName: \"kubernetes.io/projected/a57b5ee9-2403-4579-9d1a-3b09291419fc-kube-api-access-jtr56\") pod \"coredns-668d6bf9bc-kxgvl\" (UID: \"a57b5ee9-2403-4579-9d1a-3b09291419fc\") " pod="kube-system/coredns-668d6bf9bc-kxgvl"
Jun 20 19:15:12.038481 kubelet[3184]: I0620 19:15:12.038253 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7sjr\" (UniqueName: \"kubernetes.io/projected/5889ab45-61a9-4ede-8f8a-370eeb6cef14-kube-api-access-w7sjr\") pod \"coredns-668d6bf9bc-7kk5q\" (UID: \"5889ab45-61a9-4ede-8f8a-370eeb6cef14\") " pod="kube-system/coredns-668d6bf9bc-7kk5q"
Jun 20 19:15:12.038481 kubelet[3184]: I0620 19:15:12.038272 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a57b5ee9-2403-4579-9d1a-3b09291419fc-config-volume\") pod \"coredns-668d6bf9bc-kxgvl\" (UID: \"a57b5ee9-2403-4579-9d1a-3b09291419fc\") " pod="kube-system/coredns-668d6bf9bc-kxgvl"
Jun 20 19:15:12.276824 containerd[1733]: time="2025-06-20T19:15:12.276726964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kxgvl,Uid:a57b5ee9-2403-4579-9d1a-3b09291419fc,Namespace:kube-system,Attempt:0,}"
Jun 20 19:15:12.283620 containerd[1733]: time="2025-06-20T19:15:12.283542533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kk5q,Uid:5889ab45-61a9-4ede-8f8a-370eeb6cef14,Namespace:kube-system,Attempt:0,}"
Jun 20 19:15:12.487682 kubelet[3184]: I0620 19:15:12.487161 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xxrs9" podStartSLOduration=10.005472668 podStartE2EDuration="28.487133484s" podCreationTimestamp="2025-06-20 19:14:44 +0000 UTC" firstStartedPulling="2025-06-20 19:14:44.52690017 +0000 UTC m=+6.288387433" lastFinishedPulling="2025-06-20 19:15:03.008560973 +0000 UTC m=+24.770048249" observedRunningTime="2025-06-20 19:15:12.486808627 +0000 UTC m=+34.248295894" watchObservedRunningTime="2025-06-20 19:15:12.487133484 +0000 UTC m=+34.248620753"
Jun 20 19:15:13.972903 systemd-networkd[1363]: cilium_host: Link UP
Jun 20 19:15:13.973701 systemd-networkd[1363]: cilium_net: Link UP
Jun 20 19:15:13.974148 systemd-networkd[1363]: cilium_net: Gained carrier
Jun 20 19:15:13.974558 systemd-networkd[1363]: cilium_host: Gained carrier
Jun 20 19:15:14.122301 systemd-networkd[1363]: cilium_vxlan: Link UP
Jun 20 19:15:14.122317 systemd-networkd[1363]: cilium_vxlan: Gained carrier
Jun 20 19:15:14.418392 kernel: NET: Registered PF_ALG protocol family
Jun 20 19:15:14.536653 systemd-networkd[1363]: cilium_net: Gained IPv6LL
Jun 20 19:15:14.920555 systemd-networkd[1363]: cilium_host: Gained IPv6LL
Jun 20 19:15:14.968036 systemd-networkd[1363]: lxc_health: Link UP
Jun 20 19:15:14.980491 systemd-networkd[1363]: lxc_health: Gained carrier
Jun 20 19:15:15.329880 systemd-networkd[1363]: lxcac66a6c79f0e: Link UP
Jun 20 19:15:15.335442 kernel: eth0: renamed from tmp1ce40
Jun 20 19:15:15.336760 systemd-networkd[1363]: lxcac66a6c79f0e: Gained carrier
Jun 20 19:15:15.385584 kernel: eth0: renamed from tmp0bcee
Jun 20 19:15:15.384548 systemd-networkd[1363]: lxc6da3c5f22a22: Link UP
Jun 20 19:15:15.389925 systemd-networkd[1363]: lxc6da3c5f22a22: Gained carrier
Jun 20 19:15:15.880601 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL
Jun 20 19:15:16.392658 systemd-networkd[1363]: lxc6da3c5f22a22: Gained IPv6LL
Jun 20 19:15:16.776627 systemd-networkd[1363]: lxc_health: Gained IPv6LL
Jun 20 19:15:16.840649 systemd-networkd[1363]: lxcac66a6c79f0e: Gained IPv6LL
Jun 20 19:15:18.966856 containerd[1733]: time="2025-06-20T19:15:18.966810362Z" level=info msg="connecting to shim 1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598" address="unix:///run/containerd/s/716ab7a883369ea3f2b56a68102baf86d13a276269890918b8e303bfd8d7fd8d" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:15:18.987526 systemd[1]: Started cri-containerd-1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598.scope - libcontainer container 1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598.
Jun 20 19:15:19.016008 containerd[1733]: time="2025-06-20T19:15:19.015797379Z" level=info msg="connecting to shim 0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b" address="unix:///run/containerd/s/e29192548ca1beeeb93d488c411efa5c65265e00e9178b8c7cedc9c83aef5e96" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:15:19.046542 systemd[1]: Started cri-containerd-0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b.scope - libcontainer container 0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b.
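The kubelet entry above for pod "kube-system/cilium-xxrs9" reports podStartSLOduration=10.005472668 and podStartE2EDuration="28.487133484s", and both figures follow from the timestamps printed in the same entry: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python check of that arithmetic, using only the values from this log entry (the helper and variable names are mine, not kubelet's):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    """Parse an 'HH:MM:SS.frac' timestamp on 2025-06-20 UTC, truncating to microseconds."""
    hms, _, frac = s.partition(".")
    h, m, sec = map(int, hms.split(":"))
    micros = int((frac or "0")[:6].ljust(6, "0"))
    return datetime(2025, 6, 20, h, m, sec, micros, tzinfo=timezone.utc)

created        = ts("19:14:44")               # podCreationTimestamp
first_pull     = ts("19:14:44.52690017")      # firstStartedPulling
last_pull      = ts("19:15:03.008560973")     # lastFinishedPulling
watch_observed = ts("19:15:12.487133484")     # watchObservedRunningTime

e2e = (watch_observed - created).total_seconds()          # full pod startup time
slo = e2e - (last_pull - first_pull).total_seconds()      # startup time minus image pull

print(e2e, slo)
```

To microsecond precision this reproduces the logged values (28.487133s and 10.005473s); the tiny residual versus the logged nanosecond figures comes from kubelet printing trailing-zero-trimmed nanosecond timestamps.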
Jun 20 19:15:19.052676 containerd[1733]: time="2025-06-20T19:15:19.052617250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kxgvl,Uid:a57b5ee9-2403-4579-9d1a-3b09291419fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598\""
Jun 20 19:15:19.057813 containerd[1733]: time="2025-06-20T19:15:19.057750178Z" level=info msg="CreateContainer within sandbox \"1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:15:19.210854 containerd[1733]: time="2025-06-20T19:15:19.210803599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7kk5q,Uid:5889ab45-61a9-4ede-8f8a-370eeb6cef14,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b\""
Jun 20 19:15:19.213231 containerd[1733]: time="2025-06-20T19:15:19.213197434Z" level=info msg="CreateContainer within sandbox \"0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:15:19.309762 containerd[1733]: time="2025-06-20T19:15:19.309440423Z" level=info msg="Container 7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:15:19.565342 containerd[1733]: time="2025-06-20T19:15:19.565202676Z" level=info msg="CreateContainer within sandbox \"1ce40324f879f1d3840abee566463fae3b543938c0fb439f3c9f4ddd9092c598\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409\""
Jun 20 19:15:19.567576 containerd[1733]: time="2025-06-20T19:15:19.566012019Z" level=info msg="StartContainer for \"7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409\""
Jun 20 19:15:19.567576 containerd[1733]: time="2025-06-20T19:15:19.567289524Z" level=info msg="connecting to shim 7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409" address="unix:///run/containerd/s/716ab7a883369ea3f2b56a68102baf86d13a276269890918b8e303bfd8d7fd8d" protocol=ttrpc version=3
Jun 20 19:15:19.587605 systemd[1]: Started cri-containerd-7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409.scope - libcontainer container 7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409.
Jun 20 19:15:19.616391 containerd[1733]: time="2025-06-20T19:15:19.616208110Z" level=info msg="Container 4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:15:19.656247 containerd[1733]: time="2025-06-20T19:15:19.656195588Z" level=info msg="StartContainer for \"7c9df3bc8cabea1959a12be60b3c9e15440470773f385dfc8efe0010b6640409\" returns successfully"
Jun 20 19:15:19.753692 containerd[1733]: time="2025-06-20T19:15:19.753644643Z" level=info msg="CreateContainer within sandbox \"0bcee2b325417eb69b9b03c7cd54ca37c13c105beb77395caa1a66d9e3054e5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c\""
Jun 20 19:15:19.754263 containerd[1733]: time="2025-06-20T19:15:19.754224916Z" level=info msg="StartContainer for \"4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c\""
Jun 20 19:15:19.755256 containerd[1733]: time="2025-06-20T19:15:19.755216969Z" level=info msg="connecting to shim 4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c" address="unix:///run/containerd/s/e29192548ca1beeeb93d488c411efa5c65265e00e9178b8c7cedc9c83aef5e96" protocol=ttrpc version=3
Jun 20 19:15:19.775528 systemd[1]: Started cri-containerd-4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c.scope - libcontainer container 4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c.
Jun 20 19:15:19.801816 containerd[1733]: time="2025-06-20T19:15:19.801774846Z" level=info msg="StartContainer for \"4682f908745a84ee5d8af62d48292d28a7aaa36c83db0357532336e866e2a51c\" returns successfully"
Jun 20 19:15:20.511013 kubelet[3184]: I0620 19:15:20.510949 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7kk5q" podStartSLOduration=36.510842525 podStartE2EDuration="36.510842525s" podCreationTimestamp="2025-06-20 19:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:20.510332206 +0000 UTC m=+42.271819475" watchObservedRunningTime="2025-06-20 19:15:20.510842525 +0000 UTC m=+42.272329795"
Jun 20 19:15:20.545891 kubelet[3184]: I0620 19:15:20.545821 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kxgvl" podStartSLOduration=36.54578679 podStartE2EDuration="36.54578679s" podCreationTimestamp="2025-06-20 19:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:20.525984608 +0000 UTC m=+42.287471880" watchObservedRunningTime="2025-06-20 19:15:20.54578679 +0000 UTC m=+42.307274082"
Jun 20 19:16:18.803553 systemd[1]: Started sshd@7-10.200.4.34:22-10.200.16.10:50464.service - OpenSSH per-connection server daemon (10.200.16.10:50464).
Jun 20 19:16:19.405172 sshd[4498]: Accepted publickey for core from 10.200.16.10 port 50464 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:19.406489 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:19.411313 systemd-logind[1704]: New session 10 of user core.
Jun 20 19:16:19.416518 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:16:19.890581 sshd[4500]: Connection closed by 10.200.16.10 port 50464
Jun 20 19:16:19.891205 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:19.894201 systemd[1]: sshd@7-10.200.4.34:22-10.200.16.10:50464.service: Deactivated successfully.
Jun 20 19:16:19.896112 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:16:19.897849 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:16:19.899458 systemd-logind[1704]: Removed session 10.
Jun 20 19:16:24.995841 systemd[1]: Started sshd@8-10.200.4.34:22-10.200.16.10:50466.service - OpenSSH per-connection server daemon (10.200.16.10:50466).
Jun 20 19:16:25.591659 sshd[4513]: Accepted publickey for core from 10.200.16.10 port 50466 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:25.593079 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:25.597350 systemd-logind[1704]: New session 11 of user core.
Jun 20 19:16:25.605525 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:16:26.060730 sshd[4515]: Connection closed by 10.200.16.10 port 50466
Jun 20 19:16:26.061327 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:26.064167 systemd[1]: sshd@8-10.200.4.34:22-10.200.16.10:50466.service: Deactivated successfully.
Jun 20 19:16:26.066163 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:16:26.067645 systemd-logind[1704]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:16:26.069041 systemd-logind[1704]: Removed session 11.
Jun 20 19:16:31.174887 systemd[1]: Started sshd@9-10.200.4.34:22-10.200.16.10:41060.service - OpenSSH per-connection server daemon (10.200.16.10:41060).
Jun 20 19:16:31.772538 sshd[4528]: Accepted publickey for core from 10.200.16.10 port 41060 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:31.773979 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:31.778892 systemd-logind[1704]: New session 12 of user core.
Jun 20 19:16:31.781532 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:16:32.248730 sshd[4530]: Connection closed by 10.200.16.10 port 41060
Jun 20 19:16:32.249392 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:32.252208 systemd[1]: sshd@9-10.200.4.34:22-10.200.16.10:41060.service: Deactivated successfully.
Jun 20 19:16:32.254592 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:16:32.257591 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:16:32.259468 systemd-logind[1704]: Removed session 12.
Jun 20 19:16:37.356778 systemd[1]: Started sshd@10-10.200.4.34:22-10.200.16.10:41068.service - OpenSSH per-connection server daemon (10.200.16.10:41068).
Jun 20 19:16:37.956474 sshd[4545]: Accepted publickey for core from 10.200.16.10 port 41068 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:37.957692 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:37.961957 systemd-logind[1704]: New session 13 of user core.
Jun 20 19:16:37.966552 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:16:38.430068 sshd[4547]: Connection closed by 10.200.16.10 port 41068
Jun 20 19:16:38.431292 sshd-session[4545]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:38.434690 systemd[1]: sshd@10-10.200.4.34:22-10.200.16.10:41068.service: Deactivated successfully.
Jun 20 19:16:38.436659 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:16:38.438089 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:16:38.440443 systemd-logind[1704]: Removed session 13.
Jun 20 19:16:38.537953 systemd[1]: Started sshd@11-10.200.4.34:22-10.200.16.10:49860.service - OpenSSH per-connection server daemon (10.200.16.10:49860).
Jun 20 19:16:39.139210 sshd[4562]: Accepted publickey for core from 10.200.16.10 port 49860 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:39.141077 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:39.145596 systemd-logind[1704]: New session 14 of user core.
Jun 20 19:16:39.151535 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:16:39.640649 sshd[4564]: Connection closed by 10.200.16.10 port 49860
Jun 20 19:16:39.641225 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:39.643982 systemd[1]: sshd@11-10.200.4.34:22-10.200.16.10:49860.service: Deactivated successfully.
Jun 20 19:16:39.646419 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:16:39.647720 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:16:39.648881 systemd-logind[1704]: Removed session 14.
Jun 20 19:16:39.747902 systemd[1]: Started sshd@12-10.200.4.34:22-10.200.16.10:49866.service - OpenSSH per-connection server daemon (10.200.16.10:49866).
Jun 20 19:16:40.343319 sshd[4574]: Accepted publickey for core from 10.200.16.10 port 49866 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:40.344511 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:40.348918 systemd-logind[1704]: New session 15 of user core.
Jun 20 19:16:40.353531 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:16:40.819596 sshd[4576]: Connection closed by 10.200.16.10 port 49866
Jun 20 19:16:40.820020 sshd-session[4574]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:40.823456 systemd[1]: sshd@12-10.200.4.34:22-10.200.16.10:49866.service: Deactivated successfully.
Jun 20 19:16:40.825281 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:16:40.826118 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:16:40.827274 systemd-logind[1704]: Removed session 15.
Jun 20 19:16:45.938712 systemd[1]: Started sshd@13-10.200.4.34:22-10.200.16.10:49878.service - OpenSSH per-connection server daemon (10.200.16.10:49878).
Jun 20 19:16:46.537034 sshd[4589]: Accepted publickey for core from 10.200.16.10 port 49878 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:46.538420 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:46.543108 systemd-logind[1704]: New session 16 of user core.
Jun 20 19:16:46.551504 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:16:47.014121 sshd[4591]: Connection closed by 10.200.16.10 port 49878
Jun 20 19:16:47.016825 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:47.020105 systemd[1]: sshd@13-10.200.4.34:22-10.200.16.10:49878.service: Deactivated successfully.
Jun 20 19:16:47.021752 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:16:47.022596 systemd-logind[1704]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:16:47.023774 systemd-logind[1704]: Removed session 16.
Jun 20 19:16:47.126648 systemd[1]: Started sshd@14-10.200.4.34:22-10.200.16.10:49886.service - OpenSSH per-connection server daemon (10.200.16.10:49886).
Jun 20 19:16:47.731284 sshd[4603]: Accepted publickey for core from 10.200.16.10 port 49886 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:47.732874 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:47.737915 systemd-logind[1704]: New session 17 of user core.
Jun 20 19:16:47.742520 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:16:48.242230 sshd[4605]: Connection closed by 10.200.16.10 port 49886
Jun 20 19:16:48.242942 sshd-session[4603]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:48.246500 systemd[1]: sshd@14-10.200.4.34:22-10.200.16.10:49886.service: Deactivated successfully.
Jun 20 19:16:48.248458 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:16:48.249433 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:16:48.251184 systemd-logind[1704]: Removed session 17.
Jun 20 19:16:48.347871 systemd[1]: Started sshd@15-10.200.4.34:22-10.200.16.10:49902.service - OpenSSH per-connection server daemon (10.200.16.10:49902).
Jun 20 19:16:48.950509 sshd[4614]: Accepted publickey for core from 10.200.16.10 port 49902 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:48.951911 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:48.957616 systemd-logind[1704]: New session 18 of user core.
Jun 20 19:16:48.964555 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:16:50.168982 sshd[4616]: Connection closed by 10.200.16.10 port 49902
Jun 20 19:16:50.169655 sshd-session[4614]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:50.172445 systemd[1]: sshd@15-10.200.4.34:22-10.200.16.10:49902.service: Deactivated successfully.
Jun 20 19:16:50.174513 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:16:50.176501 systemd-logind[1704]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:16:50.177501 systemd-logind[1704]: Removed session 18.
Jun 20 19:16:50.290812 systemd[1]: Started sshd@16-10.200.4.34:22-10.200.16.10:33066.service - OpenSSH per-connection server daemon (10.200.16.10:33066).
Jun 20 19:16:50.893502 sshd[4633]: Accepted publickey for core from 10.200.16.10 port 33066 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:50.894663 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:50.899238 systemd-logind[1704]: New session 19 of user core.
Jun 20 19:16:50.903514 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:16:51.448602 sshd[4635]: Connection closed by 10.200.16.10 port 33066
Jun 20 19:16:51.449267 sshd-session[4633]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:51.452977 systemd[1]: sshd@16-10.200.4.34:22-10.200.16.10:33066.service: Deactivated successfully.
Jun 20 19:16:51.454862 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:16:51.455747 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:16:51.457082 systemd-logind[1704]: Removed session 19.
Jun 20 19:16:51.558593 systemd[1]: Started sshd@17-10.200.4.34:22-10.200.16.10:33080.service - OpenSSH per-connection server daemon (10.200.16.10:33080).
Jun 20 19:16:52.161979 sshd[4645]: Accepted publickey for core from 10.200.16.10 port 33080 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:52.163248 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:52.167443 systemd-logind[1704]: New session 20 of user core.
Jun 20 19:16:52.170538 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:16:52.634715 sshd[4647]: Connection closed by 10.200.16.10 port 33080
Jun 20 19:16:52.635326 sshd-session[4645]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:52.638628 systemd[1]: sshd@17-10.200.4.34:22-10.200.16.10:33080.service: Deactivated successfully.
Jun 20 19:16:52.640440 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:16:52.641138 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:16:52.642671 systemd-logind[1704]: Removed session 20.
Jun 20 19:16:57.741801 systemd[1]: Started sshd@18-10.200.4.34:22-10.200.16.10:33088.service - OpenSSH per-connection server daemon (10.200.16.10:33088).
Jun 20 19:16:58.345145 sshd[4661]: Accepted publickey for core from 10.200.16.10 port 33088 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:16:58.346350 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:16:58.350979 systemd-logind[1704]: New session 21 of user core.
Jun 20 19:16:58.354524 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:16:58.823331 sshd[4663]: Connection closed by 10.200.16.10 port 33088
Jun 20 19:16:58.823929 sshd-session[4661]: pam_unix(sshd:session): session closed for user core
Jun 20 19:16:58.827276 systemd[1]: sshd@18-10.200.4.34:22-10.200.16.10:33088.service: Deactivated successfully.
Jun 20 19:16:58.829182 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:16:58.830125 systemd-logind[1704]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:16:58.831477 systemd-logind[1704]: Removed session 21.
Jun 20 19:17:03.972917 systemd[1]: Started sshd@19-10.200.4.34:22-10.200.16.10:47860.service - OpenSSH per-connection server daemon (10.200.16.10:47860).
Jun 20 19:17:04.612190 sshd[4675]: Accepted publickey for core from 10.200.16.10 port 47860 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:04.613671 sshd-session[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:04.618344 systemd-logind[1704]: New session 22 of user core.
Jun 20 19:17:04.623521 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:17:05.111518 sshd[4677]: Connection closed by 10.200.16.10 port 47860
Jun 20 19:17:05.112153 sshd-session[4675]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:05.114840 systemd[1]: sshd@19-10.200.4.34:22-10.200.16.10:47860.service: Deactivated successfully.
Jun 20 19:17:05.116660 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:17:05.118711 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:17:05.119908 systemd-logind[1704]: Removed session 22.
Jun 20 19:17:10.227976 systemd[1]: Started sshd@20-10.200.4.34:22-10.200.16.10:51570.service - OpenSSH per-connection server daemon (10.200.16.10:51570).
Jun 20 19:17:10.829671 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 51570 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:10.830942 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:10.835643 systemd-logind[1704]: New session 23 of user core.
Jun 20 19:17:10.840538 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:17:11.296699 sshd[4691]: Connection closed by 10.200.16.10 port 51570
Jun 20 19:17:11.297340 sshd-session[4689]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:11.300427 systemd[1]: sshd@20-10.200.4.34:22-10.200.16.10:51570.service: Deactivated successfully.
Jun 20 19:17:11.302340 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:17:11.303873 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:17:11.305425 systemd-logind[1704]: Removed session 23.
Jun 20 19:17:11.403096 systemd[1]: Started sshd@21-10.200.4.34:22-10.200.16.10:51586.service - OpenSSH per-connection server daemon (10.200.16.10:51586).
Jun 20 19:17:12.004401 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 51586 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:17:12.005640 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:12.010145 systemd-logind[1704]: New session 24 of user core.
Jun 20 19:17:12.014586 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:17:13.738113 containerd[1733]: time="2025-06-20T19:17:13.738049024Z" level=info msg="StopContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" with timeout 30 (s)"
Jun 20 19:17:13.739816 containerd[1733]: time="2025-06-20T19:17:13.739394597Z" level=info msg="Stop container \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" with signal terminated"
Jun 20 19:17:13.751539 systemd[1]: cri-containerd-1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2.scope: Deactivated successfully.
Jun 20 19:17:13.753802 containerd[1733]: time="2025-06-20T19:17:13.753730357Z" level=info msg="received exit event container_id:\"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" id:\"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" pid:3582 exited_at:{seconds:1750447033 nanos:753302928}"
Jun 20 19:17:13.754014 containerd[1733]: time="2025-06-20T19:17:13.753728321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" id:\"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" pid:3582 exited_at:{seconds:1750447033 nanos:753302928}"
Jun 20 19:17:13.761604 containerd[1733]: time="2025-06-20T19:17:13.761570002Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:17:13.768532 containerd[1733]: time="2025-06-20T19:17:13.768497049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" id:\"6d10b8250319a204b60fb8cce76b8fc7ef3c96b1c96efd9c5b04720d5370c0f5\" pid:4731 exited_at:{seconds:1750447033 nanos:766081859}"
Jun 20 19:17:13.770181 containerd[1733]: time="2025-06-20T19:17:13.770076962Z" level=info msg="StopContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" with timeout 2 (s)"
Jun 20 19:17:13.770530 containerd[1733]: time="2025-06-20T19:17:13.770512374Z" level=info msg="Stop container \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" with signal terminated"
Jun 20 19:17:13.779815 systemd-networkd[1363]: lxc_health: Link DOWN
Jun 20 19:17:13.779822 systemd-networkd[1363]: lxc_health: Lost carrier
Jun 20 19:17:13.784055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2-rootfs.mount: Deactivated successfully.
Jun 20 19:17:13.795952 systemd[1]: cri-containerd-29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76.scope: Deactivated successfully.
Jun 20 19:17:13.796228 systemd[1]: cri-containerd-29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76.scope: Consumed 5.565s CPU time, 128.1M memory peak, 144K read from disk, 13.3M written to disk.
Jun 20 19:17:13.797668 containerd[1733]: time="2025-06-20T19:17:13.797558007Z" level=info msg="received exit event container_id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" pid:3822 exited_at:{seconds:1750447033 nanos:797342293}"
Jun 20 19:17:13.797668 containerd[1733]: time="2025-06-20T19:17:13.797648626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" id:\"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" pid:3822 exited_at:{seconds:1750447033 nanos:797342293}"
Jun 20 19:17:13.813403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76-rootfs.mount: Deactivated successfully.
Jun 20 19:17:13.879907 containerd[1733]: time="2025-06-20T19:17:13.879757896Z" level=info msg="StopContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" returns successfully"
Jun 20 19:17:13.880742 containerd[1733]: time="2025-06-20T19:17:13.880703841Z" level=info msg="StopPodSandbox for \"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\""
Jun 20 19:17:13.880973 containerd[1733]: time="2025-06-20T19:17:13.880913943Z" level=info msg="Container to stop \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.883300 containerd[1733]: time="2025-06-20T19:17:13.883275194Z" level=info msg="StopContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" returns successfully"
Jun 20 19:17:13.883836 containerd[1733]: time="2025-06-20T19:17:13.883817960Z" level=info msg="StopPodSandbox for \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\""
Jun 20 19:17:13.884264 containerd[1733]: time="2025-06-20T19:17:13.884020676Z" level=info msg="Container to stop \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.884264 containerd[1733]: time="2025-06-20T19:17:13.884038001Z" level=info msg="Container to stop \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.884264 containerd[1733]: time="2025-06-20T19:17:13.884049797Z" level=info msg="Container to stop \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.884264 containerd[1733]: time="2025-06-20T19:17:13.884059388Z" level=info msg="Container to stop \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.884264 containerd[1733]: time="2025-06-20T19:17:13.884067609Z" level=info msg="Container to stop \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:17:13.890420 systemd[1]: cri-containerd-15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84.scope: Deactivated successfully.
Jun 20 19:17:13.895866 systemd[1]: cri-containerd-d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde.scope: Deactivated successfully.
Jun 20 19:17:13.898911 containerd[1733]: time="2025-06-20T19:17:13.898818314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" id:\"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" pid:3291 exit_status:137 exited_at:{seconds:1750447033 nanos:896697344}"
Jun 20 19:17:13.922671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde-rootfs.mount: Deactivated successfully.
Jun 20 19:17:13.928101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84-rootfs.mount: Deactivated successfully.
Jun 20 19:17:13.943036 containerd[1733]: time="2025-06-20T19:17:13.942966131Z" level=info msg="shim disconnected" id=d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde namespace=k8s.io
Jun 20 19:17:13.943036 containerd[1733]: time="2025-06-20T19:17:13.943000616Z" level=warning msg="cleaning up after shim disconnected" id=d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde namespace=k8s.io
Jun 20 19:17:13.943036 containerd[1733]: time="2025-06-20T19:17:13.943008371Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:17:13.944099 containerd[1733]: time="2025-06-20T19:17:13.944069793Z" level=info msg="received exit event sandbox_id:\"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" exit_status:137 exited_at:{seconds:1750447033 nanos:896697344}"
Jun 20 19:17:13.944767 containerd[1733]: time="2025-06-20T19:17:13.944659429Z" level=info msg="shim disconnected" id=15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84 namespace=k8s.io
Jun 20 19:17:13.944767 containerd[1733]: time="2025-06-20T19:17:13.944705482Z" level=warning msg="cleaning up after shim disconnected" id=15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84 namespace=k8s.io
Jun 20 19:17:13.944767 containerd[1733]: time="2025-06-20T19:17:13.944714076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:17:13.949479 containerd[1733]: time="2025-06-20T19:17:13.949372912Z" level=info msg="TearDown network for sandbox \"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" successfully"
Jun 20 19:17:13.949479 containerd[1733]: time="2025-06-20T19:17:13.949400129Z" level=info msg="StopPodSandbox for \"15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84\" returns successfully"
Jun 20 19:17:13.949689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15ec09e59713a41d41d52d48be48f1edc89c7ed5acebf684d5752985578ebd84-shm.mount: Deactivated successfully.
Jun 20 19:17:13.963514 containerd[1733]: time="2025-06-20T19:17:13.963485537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" id:\"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" pid:3374 exit_status:137 exited_at:{seconds:1750447033 nanos:899621689}"
Jun 20 19:17:13.964693 containerd[1733]: time="2025-06-20T19:17:13.964651319Z" level=info msg="received exit event sandbox_id:\"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" exit_status:137 exited_at:{seconds:1750447033 nanos:899621689}"
Jun 20 19:17:13.965687 containerd[1733]: time="2025-06-20T19:17:13.965661988Z" level=info msg="TearDown network for sandbox \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" successfully"
Jun 20 19:17:13.965687 containerd[1733]: time="2025-06-20T19:17:13.965688924Z" level=info msg="StopPodSandbox for \"d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde\" returns successfully"
Jun 20 19:17:14.030090 kubelet[3184]: I0620 19:17:14.029964 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-bpf-maps\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032487 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032527 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0893e93-8404-4561-9e4f-c0be422a4f64-cilium-config-path\") pod \"d0893e93-8404-4561-9e4f-c0be422a4f64\" (UID: \"d0893e93-8404-4561-9e4f-c0be422a4f64\") "
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032551 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-kernel\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032568 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-run\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032585 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cni-path\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.032958 kubelet[3184]: I0620 19:17:14.032610 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-cgroup\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032632 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c4aab-1100-4733-a991-f584034c069d-cilium-config-path\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032648 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-lib-modules\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032669 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvpl\" (UniqueName: \"kubernetes.io/projected/d0893e93-8404-4561-9e4f-c0be422a4f64-kube-api-access-fjvpl\") pod \"d0893e93-8404-4561-9e4f-c0be422a4f64\" (UID: \"d0893e93-8404-4561-9e4f-c0be422a4f64\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032686 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-hubble-tls\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032700 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-etc-cni-netd\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033190 kubelet[3184]: I0620 19:17:14.032719 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx9b9\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-kube-api-access-bx9b9\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033332 kubelet[3184]: I0620 19:17:14.032734 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-xtables-lock\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033332 kubelet[3184]: I0620 19:17:14.032754 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c4aab-1100-4733-a991-f584034c069d-clustermesh-secrets\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033332 kubelet[3184]: I0620 19:17:14.032777 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-net\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033332 kubelet[3184]: I0620 19:17:14.032794 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-hostproc\") pod \"719c4aab-1100-4733-a991-f584034c069d\" (UID: \"719c4aab-1100-4733-a991-f584034c069d\") "
Jun 20 19:17:14.033332 kubelet[3184]: I0620 19:17:14.032849 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-hostproc" (OuterVolumeSpecName: "hostproc") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039387 kubelet[3184]: I0620 19:17:14.035267 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0893e93-8404-4561-9e4f-c0be422a4f64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0893e93-8404-4561-9e4f-c0be422a4f64" (UID: "d0893e93-8404-4561-9e4f-c0be422a4f64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:17:14.039387 kubelet[3184]: I0620 19:17:14.035799 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039387 kubelet[3184]: I0620 19:17:14.035836 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039387 kubelet[3184]: I0620 19:17:14.035852 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cni-path" (OuterVolumeSpecName: "cni-path") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039387 kubelet[3184]: I0620 19:17:14.035868 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039576 kubelet[3184]: I0620 19:17:14.037551 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039576 kubelet[3184]: I0620 19:17:14.037587 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039576 kubelet[3184]: I0620 19:17:14.037612 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.039640 kubelet[3184]: I0620 19:17:14.039584 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:17:14.043902 kubelet[3184]: I0620 19:17:14.043872 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719c4aab-1100-4733-a991-f584034c069d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:17:14.045186 kubelet[3184]: I0620 19:17:14.044435 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:17:14.048515 kubelet[3184]: I0620 19:17:14.048490 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719c4aab-1100-4733-a991-f584034c069d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 19:17:14.048619 kubelet[3184]: I0620 19:17:14.048490 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0893e93-8404-4561-9e4f-c0be422a4f64-kube-api-access-fjvpl" (OuterVolumeSpecName: "kube-api-access-fjvpl") pod "d0893e93-8404-4561-9e4f-c0be422a4f64" (UID: "d0893e93-8404-4561-9e4f-c0be422a4f64"). InnerVolumeSpecName "kube-api-access-fjvpl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:17:14.048660 kubelet[3184]: I0620 19:17:14.048548 3184 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-kube-api-access-bx9b9" (OuterVolumeSpecName: "kube-api-access-bx9b9") pod "719c4aab-1100-4733-a991-f584034c069d" (UID: "719c4aab-1100-4733-a991-f584034c069d"). InnerVolumeSpecName "kube-api-access-bx9b9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:17:14.133283 kubelet[3184]: I0620 19:17:14.133228 3184 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-bpf-maps\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133283 kubelet[3184]: I0620 19:17:14.133276 3184 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-kernel\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133283 kubelet[3184]: I0620 19:17:14.133286 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-run\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133283 kubelet[3184]: I0620 19:17:14.133295 3184 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cni-path\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133313 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0893e93-8404-4561-9e4f-c0be422a4f64-cilium-config-path\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133322 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-cilium-cgroup\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133332 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c4aab-1100-4733-a991-f584034c069d-cilium-config-path\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133339 3184 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-lib-modules\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133347 3184 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-hubble-tls\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133382 3184 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-etc-cni-netd\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133390 3184 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bx9b9\" (UniqueName: \"kubernetes.io/projected/719c4aab-1100-4733-a991-f584034c069d-kube-api-access-bx9b9\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133537 kubelet[3184]: I0620 19:17:14.133398 3184 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-xtables-lock\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133662 kubelet[3184]: I0620 19:17:14.133407 3184 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjvpl\" (UniqueName: \"kubernetes.io/projected/d0893e93-8404-4561-9e4f-c0be422a4f64-kube-api-access-fjvpl\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133662 kubelet[3184]: I0620 19:17:14.133416 3184 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c4aab-1100-4733-a991-f584034c069d-clustermesh-secrets\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133662 kubelet[3184]: I0620 19:17:14.133424 3184 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-hostproc\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.133662 kubelet[3184]: I0620 19:17:14.133433 3184 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c4aab-1100-4733-a991-f584034c069d-host-proc-sys-net\") on node \"ci-4344.1.0-a-abe1e11a87\" DevicePath \"\""
Jun 20 19:17:14.333802 systemd[1]: Removed slice kubepods-burstable-pod719c4aab_1100_4733_a991_f584034c069d.slice - libcontainer container kubepods-burstable-pod719c4aab_1100_4733_a991_f584034c069d.slice.
Jun 20 19:17:14.333935 systemd[1]: kubepods-burstable-pod719c4aab_1100_4733_a991_f584034c069d.slice: Consumed 5.636s CPU time, 128.6M memory peak, 144K read from disk, 13.3M written to disk.
Jun 20 19:17:14.335524 systemd[1]: Removed slice kubepods-besteffort-podd0893e93_8404_4561_9e4f_c0be422a4f64.slice - libcontainer container kubepods-besteffort-podd0893e93_8404_4561_9e4f_c0be422a4f64.slice.
Jun 20 19:17:14.718018 kubelet[3184]: I0620 19:17:14.717937 3184 scope.go:117] "RemoveContainer" containerID="29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76"
Jun 20 19:17:14.721228 containerd[1733]: time="2025-06-20T19:17:14.721180103Z" level=info msg="RemoveContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\""
Jun 20 19:17:14.735046 containerd[1733]: time="2025-06-20T19:17:14.735004731Z" level=info msg="RemoveContainer for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" returns successfully"
Jun 20 19:17:14.735492 kubelet[3184]: I0620 19:17:14.735414 3184 scope.go:117] "RemoveContainer" containerID="1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11"
Jun 20 19:17:14.737443 containerd[1733]: time="2025-06-20T19:17:14.737229115Z" level=info msg="RemoveContainer for \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\""
Jun 20 19:17:14.748344 containerd[1733]: time="2025-06-20T19:17:14.748298730Z" level=info msg="RemoveContainer for \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" returns successfully"
Jun 20 19:17:14.748829 kubelet[3184]: I0620 19:17:14.748474 3184 scope.go:117] "RemoveContainer" containerID="85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5"
Jun 20 19:17:14.751066 containerd[1733]: time="2025-06-20T19:17:14.751031510Z" level=info msg="RemoveContainer for \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\""
Jun 20 19:17:14.759603 containerd[1733]: time="2025-06-20T19:17:14.759565501Z" level=info msg="RemoveContainer for \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" returns successfully"
Jun 20 19:17:14.759756 kubelet[3184]: I0620 19:17:14.759737 3184 scope.go:117] "RemoveContainer" containerID="f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295"
Jun 20 19:17:14.760902 containerd[1733]: time="2025-06-20T19:17:14.760867198Z" level=info msg="RemoveContainer for \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\""
Jun 20 19:17:14.767246 containerd[1733]: time="2025-06-20T19:17:14.767220046Z" level=info msg="RemoveContainer for \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" returns successfully"
Jun 20 19:17:14.767426 kubelet[3184]: I0620 19:17:14.767395 3184 scope.go:117] "RemoveContainer" containerID="f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533"
Jun 20 19:17:14.768611 containerd[1733]: time="2025-06-20T19:17:14.768588285Z" level=info msg="RemoveContainer for \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\""
Jun 20 19:17:14.775509 containerd[1733]: time="2025-06-20T19:17:14.775471041Z" level=info msg="RemoveContainer for \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" returns successfully"
Jun 20 19:17:14.775670 kubelet[3184]: I0620 19:17:14.775655 3184 scope.go:117] "RemoveContainer" containerID="29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76"
Jun 20 19:17:14.775878 containerd[1733]: time="2025-06-20T19:17:14.775848582Z" level=error msg="ContainerStatus for \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\": not found"
Jun 20 19:17:14.775982 kubelet[3184]: E0620 19:17:14.775961 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\": not found" containerID="29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76"
Jun 20 19:17:14.776067 kubelet[3184]: I0620 19:17:14.775990 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76"} err="failed to get container status \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d777368ca722cb893cf657ad4d35a9562c6e733a679959aa70bf9c4cd7fc76\": not found"
Jun 20 19:17:14.776126 kubelet[3184]: I0620 19:17:14.776068 3184 scope.go:117] "RemoveContainer" containerID="1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11"
Jun 20 19:17:14.776278 containerd[1733]: time="2025-06-20T19:17:14.776234612Z" level=error msg="ContainerStatus for \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\": not found"
Jun 20 19:17:14.776392 kubelet[3184]: E0620 19:17:14.776376 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\": not found" containerID="1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11"
Jun 20 19:17:14.776438 kubelet[3184]: I0620 19:17:14.776397 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11"} err="failed to get container status \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\": rpc error: code = NotFound desc = an error occurred when try to find container \"1df6dbdc25c2a81a501ec404795100460dc49f092f096845cd8157bf6b486e11\": not found"
Jun 20 19:17:14.776438 kubelet[3184]: I0620 19:17:14.776413 3184 scope.go:117] "RemoveContainer" containerID="85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5"
Jun 20 19:17:14.776603 containerd[1733]: time="2025-06-20T19:17:14.776558838Z" level=error msg="ContainerStatus for \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\": not found"
Jun 20 19:17:14.776680 kubelet[3184]: E0620 19:17:14.776665 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\": not found" containerID="85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5"
Jun 20 19:17:14.776713 kubelet[3184]: I0620 19:17:14.776685 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5"} err="failed to get container status \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"85fde8c75ea6511767d30da8809c816ad31eea7d7142d6b77892af04586b42c5\": not found"
Jun 20 19:17:14.776713 kubelet[3184]: I0620 19:17:14.776699 3184 scope.go:117] "RemoveContainer" containerID="f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295"
Jun 20 19:17:14.776844 containerd[1733]: time="2025-06-20T19:17:14.776813601Z" level=error msg="ContainerStatus for \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\": not found"
Jun 20 19:17:14.776916 kubelet[3184]: E0620 19:17:14.776886 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\": not found" containerID="f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295"
Jun 20 19:17:14.776916 kubelet[3184]: I0620 19:17:14.776905 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295"} err="failed to get container status \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\": rpc error: code = NotFound desc = an error occurred when try to find container \"f62417e73fb0dcbc8f28523e4fa5604d8d728e8000eca2e8e78cbcc12123f295\": not found"
Jun 20 19:17:14.776978 kubelet[3184]: I0620 19:17:14.776918 3184 scope.go:117] "RemoveContainer" containerID="f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533"
Jun 20 19:17:14.777051 containerd[1733]: time="2025-06-20T19:17:14.777029229Z" level=error msg="ContainerStatus for \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\": not found"
Jun 20 19:17:14.777124 kubelet[3184]: E0620 19:17:14.777108 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\": not found" containerID="f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533"
Jun 20 19:17:14.777179 kubelet[3184]: I0620 19:17:14.777125 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533"} err="failed to get container status \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8e0fd694bed6b744a23985224fb8731f030e653de479efbebfbd7b7d28f3533\": not found"
Jun 20 19:17:14.777179 kubelet[3184]: I0620 19:17:14.777139 3184 scope.go:117] "RemoveContainer" containerID="1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2"
Jun 20 19:17:14.778285 containerd[1733]: time="2025-06-20T19:17:14.778261412Z" level=info msg="RemoveContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\""
Jun 20 19:17:14.781404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7c82b6e4d07f878430edc8c626e7879073e2d0d5ef76b79188b953001fbfcde-shm.mount: Deactivated successfully.
Jun 20 19:17:14.781789 systemd[1]: var-lib-kubelet-pods-719c4aab\x2d1100\x2d4733\x2da991\x2df584034c069d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbx9b9.mount: Deactivated successfully.
Jun 20 19:17:14.781864 systemd[1]: var-lib-kubelet-pods-719c4aab\x2d1100\x2d4733\x2da991\x2df584034c069d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 19:17:14.781923 systemd[1]: var-lib-kubelet-pods-719c4aab\x2d1100\x2d4733\x2da991\x2df584034c069d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 19:17:14.781978 systemd[1]: var-lib-kubelet-pods-d0893e93\x2d8404\x2d4561\x2d9e4f\x2dc0be422a4f64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjvpl.mount: Deactivated successfully.
Jun 20 19:17:14.784898 containerd[1733]: time="2025-06-20T19:17:14.784876448Z" level=info msg="RemoveContainer for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" returns successfully" Jun 20 19:17:14.785067 kubelet[3184]: I0620 19:17:14.785038 3184 scope.go:117] "RemoveContainer" containerID="1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2" Jun 20 19:17:14.785276 containerd[1733]: time="2025-06-20T19:17:14.785250598Z" level=error msg="ContainerStatus for \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\": not found" Jun 20 19:17:14.785413 kubelet[3184]: E0620 19:17:14.785385 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\": not found" containerID="1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2" Jun 20 19:17:14.785462 kubelet[3184]: I0620 19:17:14.785418 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2"} err="failed to get container status \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1003617ed7eb6de7600f380b3b485ef4cf24e88bfb6dd947f0e903e3e438cde2\": not found" Jun 20 19:17:15.771434 sshd[4705]: Connection closed by 10.200.16.10 port 51586 Jun 20 19:17:15.772054 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:15.774874 systemd[1]: sshd@21-10.200.4.34:22-10.200.16.10:51586.service: Deactivated successfully. Jun 20 19:17:15.776666 systemd[1]: session-24.scope: Deactivated successfully. 
Jun 20 19:17:15.778020 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:17:15.779628 systemd-logind[1704]: Removed session 24. Jun 20 19:17:15.881838 systemd[1]: Started sshd@22-10.200.4.34:22-10.200.16.10:51588.service - OpenSSH per-connection server daemon (10.200.16.10:51588). Jun 20 19:17:16.329511 kubelet[3184]: I0620 19:17:16.329474 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719c4aab-1100-4733-a991-f584034c069d" path="/var/lib/kubelet/pods/719c4aab-1100-4733-a991-f584034c069d/volumes" Jun 20 19:17:16.329986 kubelet[3184]: I0620 19:17:16.329965 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0893e93-8404-4561-9e4f-c0be422a4f64" path="/var/lib/kubelet/pods/d0893e93-8404-4561-9e4f-c0be422a4f64/volumes" Jun 20 19:17:16.482664 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 51588 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:16.484202 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:16.488431 systemd-logind[1704]: New session 25 of user core. Jun 20 19:17:16.495516 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:17:17.293189 kubelet[3184]: I0620 19:17:17.292315 3184 memory_manager.go:355] "RemoveStaleState removing state" podUID="d0893e93-8404-4561-9e4f-c0be422a4f64" containerName="cilium-operator" Jun 20 19:17:17.293189 kubelet[3184]: I0620 19:17:17.292369 3184 memory_manager.go:355] "RemoveStaleState removing state" podUID="719c4aab-1100-4733-a991-f584034c069d" containerName="cilium-agent" Jun 20 19:17:17.304269 systemd[1]: Created slice kubepods-burstable-podfd881b64_fc1a_4e0f_a04e_0e9eeac61bcf.slice - libcontainer container kubepods-burstable-podfd881b64_fc1a_4e0f_a04e_0e9eeac61bcf.slice. 
Jun 20 19:17:17.350871 kubelet[3184]: I0620 19:17:17.350827 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-bpf-maps\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.350871 kubelet[3184]: I0620 19:17:17.350873 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-cilium-run\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350891 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-cni-path\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350907 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-clustermesh-secrets\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350923 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-host-proc-sys-net\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350939 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-lib-modules\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350954 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-cilium-ipsec-secrets\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351314 kubelet[3184]: I0620 19:17:17.350969 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-cilium-cgroup\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.350987 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-etc-cni-netd\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.351004 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-xtables-lock\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.351025 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-host-proc-sys-kernel\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.351043 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-hubble-tls\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.351058 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqc2r\" (UniqueName: \"kubernetes.io/projected/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-kube-api-access-nqc2r\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351481 kubelet[3184]: I0620 19:17:17.351078 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-hostproc\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.351619 kubelet[3184]: I0620 19:17:17.351095 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf-cilium-config-path\") pod \"cilium-w6795\" (UID: \"fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf\") " pod="kube-system/cilium-w6795" Jun 20 19:17:17.374336 sshd[4864]: Connection closed by 10.200.16.10 port 51588 Jun 20 19:17:17.374900 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:17.377618 systemd[1]: sshd@22-10.200.4.34:22-10.200.16.10:51588.service: Deactivated successfully. 
Jun 20 19:17:17.379464 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:17:17.381335 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:17:17.382247 systemd-logind[1704]: Removed session 25. Jun 20 19:17:17.487557 systemd[1]: Started sshd@23-10.200.4.34:22-10.200.16.10:51594.service - OpenSSH per-connection server daemon (10.200.16.10:51594). Jun 20 19:17:17.610240 containerd[1733]: time="2025-06-20T19:17:17.610084354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6795,Uid:fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf,Namespace:kube-system,Attempt:0,}" Jun 20 19:17:17.645021 containerd[1733]: time="2025-06-20T19:17:17.644972200Z" level=info msg="connecting to shim 2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:17:17.670533 systemd[1]: Started cri-containerd-2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3.scope - libcontainer container 2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3. 
Jun 20 19:17:17.698174 containerd[1733]: time="2025-06-20T19:17:17.698128658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6795,Uid:fd881b64-fc1a-4e0f-a04e-0e9eeac61bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\"" Jun 20 19:17:17.701241 containerd[1733]: time="2025-06-20T19:17:17.701193704Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:17:17.714955 containerd[1733]: time="2025-06-20T19:17:17.714885609Z" level=info msg="Container f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:17:17.730872 containerd[1733]: time="2025-06-20T19:17:17.730841271Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\"" Jun 20 19:17:17.731375 containerd[1733]: time="2025-06-20T19:17:17.731336276Z" level=info msg="StartContainer for \"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\"" Jun 20 19:17:17.732161 containerd[1733]: time="2025-06-20T19:17:17.732128816Z" level=info msg="connecting to shim f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" protocol=ttrpc version=3 Jun 20 19:17:17.750563 systemd[1]: Started cri-containerd-f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45.scope - libcontainer container f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45. 
Jun 20 19:17:17.780627 containerd[1733]: time="2025-06-20T19:17:17.780564107Z" level=info msg="StartContainer for \"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\" returns successfully" Jun 20 19:17:17.784267 systemd[1]: cri-containerd-f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45.scope: Deactivated successfully. Jun 20 19:17:17.786910 containerd[1733]: time="2025-06-20T19:17:17.786815979Z" level=info msg="received exit event container_id:\"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\" id:\"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\" pid:4938 exited_at:{seconds:1750447037 nanos:786594700}" Jun 20 19:17:17.786910 containerd[1733]: time="2025-06-20T19:17:17.786876206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\" id:\"f96ca3943c70334dca3ec3632ffe194761b33470e1b0f85f10ef222825b96b45\" pid:4938 exited_at:{seconds:1750447037 nanos:786594700}" Jun 20 19:17:18.083697 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 51594 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:18.084883 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:18.089536 systemd-logind[1704]: New session 26 of user core. Jun 20 19:17:18.096557 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 19:17:18.459373 kubelet[3184]: E0620 19:17:18.459316 3184 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:17:18.507817 sshd[4971]: Connection closed by 10.200.16.10 port 51594 Jun 20 19:17:18.508344 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:18.512161 systemd[1]: sshd@23-10.200.4.34:22-10.200.16.10:51594.service: Deactivated successfully. 
Jun 20 19:17:18.515169 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:17:18.516894 systemd-logind[1704]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:17:18.517873 systemd-logind[1704]: Removed session 26. Jun 20 19:17:18.618901 systemd[1]: Started sshd@24-10.200.4.34:22-10.200.16.10:35844.service - OpenSSH per-connection server daemon (10.200.16.10:35844). Jun 20 19:17:18.741911 containerd[1733]: time="2025-06-20T19:17:18.741734241Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:17:18.766538 containerd[1733]: time="2025-06-20T19:17:18.764366591Z" level=info msg="Container f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:17:18.769693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326848276.mount: Deactivated successfully. 
Jun 20 19:17:18.778197 containerd[1733]: time="2025-06-20T19:17:18.778162013Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\"" Jun 20 19:17:18.778652 containerd[1733]: time="2025-06-20T19:17:18.778632513Z" level=info msg="StartContainer for \"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\"" Jun 20 19:17:18.779830 containerd[1733]: time="2025-06-20T19:17:18.779740243Z" level=info msg="connecting to shim f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" protocol=ttrpc version=3 Jun 20 19:17:18.798527 systemd[1]: Started cri-containerd-f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02.scope - libcontainer container f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02. Jun 20 19:17:18.828855 containerd[1733]: time="2025-06-20T19:17:18.828447252Z" level=info msg="StartContainer for \"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\" returns successfully" Jun 20 19:17:18.832462 systemd[1]: cri-containerd-f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02.scope: Deactivated successfully. 
Jun 20 19:17:18.832901 containerd[1733]: time="2025-06-20T19:17:18.832873056Z" level=info msg="received exit event container_id:\"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\" id:\"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\" pid:4993 exited_at:{seconds:1750447038 nanos:832695372}" Jun 20 19:17:18.834459 containerd[1733]: time="2025-06-20T19:17:18.833203278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\" id:\"f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02\" pid:4993 exited_at:{seconds:1750447038 nanos:832695372}" Jun 20 19:17:18.848353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13d8ffa3319f08a7fcaec34ab7dc69c63db9339656c5c912e885544388aeb02-rootfs.mount: Deactivated successfully. Jun 20 19:17:19.214350 sshd[4979]: Accepted publickey for core from 10.200.16.10 port 35844 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:19.215903 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:19.222218 systemd-logind[1704]: New session 27 of user core. Jun 20 19:17:19.226677 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 20 19:17:19.746113 containerd[1733]: time="2025-06-20T19:17:19.745993238Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:17:19.769098 containerd[1733]: time="2025-06-20T19:17:19.769054666Z" level=info msg="Container ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:17:19.785739 containerd[1733]: time="2025-06-20T19:17:19.785699621Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\"" Jun 20 19:17:19.786319 containerd[1733]: time="2025-06-20T19:17:19.786292634Z" level=info msg="StartContainer for \"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\"" Jun 20 19:17:19.787915 containerd[1733]: time="2025-06-20T19:17:19.787857773Z" level=info msg="connecting to shim ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" protocol=ttrpc version=3 Jun 20 19:17:19.811498 systemd[1]: Started cri-containerd-ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977.scope - libcontainer container ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977. Jun 20 19:17:19.840275 systemd[1]: cri-containerd-ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977.scope: Deactivated successfully. 
Jun 20 19:17:19.841694 containerd[1733]: time="2025-06-20T19:17:19.841559580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\" id:\"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\" pid:5045 exited_at:{seconds:1750447039 nanos:841112272}" Jun 20 19:17:19.843693 containerd[1733]: time="2025-06-20T19:17:19.843256495Z" level=info msg="received exit event container_id:\"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\" id:\"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\" pid:5045 exited_at:{seconds:1750447039 nanos:841112272}" Jun 20 19:17:19.852381 containerd[1733]: time="2025-06-20T19:17:19.852341640Z" level=info msg="StartContainer for \"ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977\" returns successfully" Jun 20 19:17:19.863228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee90b502029e5f493c7fb4abb89df478c2bffc751d08931ab9db045eb659b977-rootfs.mount: Deactivated successfully. 
Jun 20 19:17:20.751348 containerd[1733]: time="2025-06-20T19:17:20.751259541Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:17:20.778393 containerd[1733]: time="2025-06-20T19:17:20.777946134Z" level=info msg="Container 7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:17:20.794912 containerd[1733]: time="2025-06-20T19:17:20.794871471Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\"" Jun 20 19:17:20.795552 containerd[1733]: time="2025-06-20T19:17:20.795452255Z" level=info msg="StartContainer for \"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\"" Jun 20 19:17:20.796342 containerd[1733]: time="2025-06-20T19:17:20.796302541Z" level=info msg="connecting to shim 7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" protocol=ttrpc version=3 Jun 20 19:17:20.820507 systemd[1]: Started cri-containerd-7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c.scope - libcontainer container 7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c. Jun 20 19:17:20.842239 systemd[1]: cri-containerd-7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c.scope: Deactivated successfully. 
Jun 20 19:17:20.843642 containerd[1733]: time="2025-06-20T19:17:20.843590311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\" id:\"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\" pid:5088 exited_at:{seconds:1750447040 nanos:843307664}" Jun 20 19:17:20.847017 containerd[1733]: time="2025-06-20T19:17:20.846852030Z" level=info msg="received exit event container_id:\"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\" id:\"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\" pid:5088 exited_at:{seconds:1750447040 nanos:843307664}" Jun 20 19:17:20.853138 containerd[1733]: time="2025-06-20T19:17:20.853111243Z" level=info msg="StartContainer for \"7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c\" returns successfully" Jun 20 19:17:20.864604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d6eea78d43879ae3a1a7deb4bdbf407f8e26192283ee8c4e25edcc9ab36682c-rootfs.mount: Deactivated successfully. Jun 20 19:17:21.756309 containerd[1733]: time="2025-06-20T19:17:21.756264427Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:17:21.775547 containerd[1733]: time="2025-06-20T19:17:21.775072569Z" level=info msg="Container dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:17:21.780736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431756761.mount: Deactivated successfully. 
Jun 20 19:17:21.789872 containerd[1733]: time="2025-06-20T19:17:21.789834975Z" level=info msg="CreateContainer within sandbox \"2422bc9b8878dbc27889ead38288b2afe6a8594a7fa6aeea390330dec15152d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\"" Jun 20 19:17:21.791454 containerd[1733]: time="2025-06-20T19:17:21.790288877Z" level=info msg="StartContainer for \"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\"" Jun 20 19:17:21.791454 containerd[1733]: time="2025-06-20T19:17:21.791125314Z" level=info msg="connecting to shim dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3" address="unix:///run/containerd/s/ad1a343b254726ce2c0b2667d9a70daad238c5f41e95deed955ffba935d99f4b" protocol=ttrpc version=3 Jun 20 19:17:21.811525 systemd[1]: Started cri-containerd-dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3.scope - libcontainer container dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3. 
Jun 20 19:17:21.842007 containerd[1733]: time="2025-06-20T19:17:21.841950702Z" level=info msg="StartContainer for \"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" returns successfully" Jun 20 19:17:21.903319 containerd[1733]: time="2025-06-20T19:17:21.903272604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" id:\"a66efce3d53cb4032e7287e3dda9e9a82fadfa343412123f27353b3f536dc843\" pid:5157 exited_at:{seconds:1750447041 nanos:902782418}" Jun 20 19:17:22.179387 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jun 20 19:17:22.295643 kubelet[3184]: I0620 19:17:22.295586 3184 setters.go:602] "Node became not ready" node="ci-4344.1.0-a-abe1e11a87" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:17:22Z","lastTransitionTime":"2025-06-20T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:17:22.775906 kubelet[3184]: I0620 19:17:22.775846 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w6795" podStartSLOduration=5.775828142 podStartE2EDuration="5.775828142s" podCreationTimestamp="2025-06-20 19:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:17:22.775821124 +0000 UTC m=+164.537308397" watchObservedRunningTime="2025-06-20 19:17:22.775828142 +0000 UTC m=+164.537315414" Jun 20 19:17:23.722929 containerd[1733]: time="2025-06-20T19:17:23.722884536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" id:\"dd3f62d9c4cff56860e517db539ed0618bb0315276048f15bb0aea16550920dc\" pid:5299 exit_status:1 
exited_at:{seconds:1750447043 nanos:722341058}" Jun 20 19:17:24.797563 systemd-networkd[1363]: lxc_health: Link UP Jun 20 19:17:24.801416 systemd-networkd[1363]: lxc_health: Gained carrier Jun 20 19:17:25.914177 containerd[1733]: time="2025-06-20T19:17:25.913529816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" id:\"388bfca6a8830ec12fe20065dcf4a8c5377b7aea47fc1fb7a0610064e126ad05\" pid:5689 exited_at:{seconds:1750447045 nanos:911491557}" Jun 20 19:17:25.917701 kubelet[3184]: E0620 19:17:25.917632 3184 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:35072->127.0.0.1:42217: write tcp 10.200.4.34:10250->10.200.4.34:38950: write: broken pipe Jun 20 19:17:26.440662 systemd-networkd[1363]: lxc_health: Gained IPv6LL Jun 20 19:17:28.054448 containerd[1733]: time="2025-06-20T19:17:28.054394553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" id:\"37f1c607d32d8730bc349bf51b49d751150327b42dc244fe169963d471a016fd\" pid:5721 exited_at:{seconds:1750447048 nanos:53851083}" Jun 20 19:17:28.057515 kubelet[3184]: E0620 19:17:28.057476 3184 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35086->127.0.0.1:42217: write tcp 127.0.0.1:35086->127.0.0.1:42217: write: broken pipe Jun 20 19:17:30.131223 containerd[1733]: time="2025-06-20T19:17:30.131173651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde25a5f76137d5eed8f8edcd3e51d997bd4d18cceb8ff8a0032a5a97d0655b3\" id:\"833fc638ca352513e9b0939a1ecc0e7b451b9d2464ee695428610a755ba2bc99\" pid:5750 exited_at:{seconds:1750447050 nanos:130864917}" Jun 20 19:17:30.255375 sshd[5026]: Connection closed by 10.200.16.10 port 35844 Jun 20 19:17:30.256089 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:30.259576 systemd[1]: 
sshd@24-10.200.4.34:22-10.200.16.10:35844.service: Deactivated successfully. Jun 20 19:17:30.261249 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 19:17:30.262012 systemd-logind[1704]: Session 27 logged out. Waiting for processes to exit. Jun 20 19:17:30.263502 systemd-logind[1704]: Removed session 27.