Jun 20 19:14:45.013588 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:14:45.013621 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:45.013632 kernel: BIOS-provided physical RAM map:
Jun 20 19:14:45.013641 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 19:14:45.013649 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 19:14:45.013657 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jun 20 19:14:45.013666 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jun 20 19:14:45.013676 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jun 20 19:14:45.013685 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jun 20 19:14:45.013693 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 19:14:45.013701 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 19:14:45.013709 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 19:14:45.013718 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 19:14:45.013726 kernel: printk: legacy bootconsole [earlyser0] enabled
Jun 20 19:14:45.013738 kernel: NX (Execute Disable) protection: active
Jun 20 19:14:45.013747 kernel: APIC: Static calls initialized
Jun 20 19:14:45.013755 kernel: efi: EFI v2.7 by Microsoft
Jun 20 19:14:45.013764 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9da518 RNG=0x3ffd2018
Jun 20 19:14:45.013773 kernel: random: crng init done
Jun 20 19:14:45.013782 kernel: secureboot: Secure boot disabled
Jun 20 19:14:45.013791 kernel: SMBIOS 3.1.0 present.
Jun 20 19:14:45.013800 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jun 20 19:14:45.013813 kernel: DMI: Memory slots populated: 2/2
Jun 20 19:14:45.013823 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 19:14:45.013832 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jun 20 19:14:45.013841 kernel: Hyper-V: Nested features: 0x3e0101
Jun 20 19:14:45.013849 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 19:14:45.013858 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 19:14:45.013867 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:45.013876 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 19:14:45.013885 kernel: tsc: Detected 2300.000 MHz processor
Jun 20 19:14:45.013894 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:14:45.013903 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:14:45.013915 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jun 20 19:14:45.013924 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 19:14:45.013933 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:14:45.013943 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jun 20 19:14:45.013952 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jun 20 19:14:45.013961 kernel: Using GB pages for direct mapping
Jun 20 19:14:45.013970 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:14:45.013983 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 19:14:45.013994 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014003 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014013 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 19:14:45.014022 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 19:14:45.014032 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014042 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014053 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014062 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:45.014072 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jun 20 19:14:45.014081 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 19:14:45.014091 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 19:14:45.014101 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jun 20 19:14:45.014110 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 19:14:45.014120 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 19:14:45.014129 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 19:14:45.014140 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 19:14:45.014149 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jun 20 19:14:45.014159 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jun 20 19:14:45.014168 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 19:14:45.014178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 20 19:14:45.014187 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jun 20 19:14:45.014197 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jun 20 19:14:45.014207 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jun 20 19:14:45.014215 kernel: Zone ranges:
Jun 20 19:14:45.014225 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:14:45.014234 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:14:45.014242 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:45.014250 kernel: Device empty
Jun 20 19:14:45.014258 kernel: Movable zone start for each node
Jun 20 19:14:45.014266 kernel: Early memory node ranges
Jun 20 19:14:45.014275 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 19:14:45.014283 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jun 20 19:14:45.014291 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jun 20 19:14:45.014301 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 19:14:45.014309 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 19:14:45.014317 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 19:14:45.016070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:14:45.016085 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 19:14:45.016094 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jun 20 19:14:45.016101 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jun 20 19:14:45.016109 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 19:14:45.016116 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:14:45.016127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:14:45.016135 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:14:45.016143 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 19:14:45.016150 kernel: TSC deadline timer available
Jun 20 19:14:45.016158 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:14:45.016166 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:14:45.016174 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:14:45.016182 kernel: CPU topo: Max. threads per core: 2
Jun 20 19:14:45.016190 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:14:45.016200 kernel: CPU topo: Num. threads per package: 2
Jun 20 19:14:45.016208 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:14:45.016216 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 19:14:45.016224 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 19:14:45.016233 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:14:45.016241 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:14:45.016249 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:14:45.016258 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:14:45.016266 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:14:45.016276 kernel: Hyper-V: PV spinlocks enabled
Jun 20 19:14:45.016284 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:14:45.016294 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:45.016303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:14:45.016311 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 19:14:45.016319 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:14:45.016341 kernel: Fallback order for Node 0: 0
Jun 20 19:14:45.016350 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jun 20 19:14:45.016361 kernel: Policy zone: Normal
Jun 20 19:14:45.016369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:14:45.016377 kernel: software IO TLB: area num 2.
Jun 20 19:14:45.016386 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:14:45.016394 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:14:45.016402 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:14:45.016411 kernel: Dynamic Preempt: voluntary
Jun 20 19:14:45.016419 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:14:45.016431 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:14:45.016448 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:14:45.016457 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:14:45.016466 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:14:45.016477 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:14:45.016486 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:14:45.016495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:14:45.016503 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:45.016512 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:45.016521 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:14:45.016530 kernel: Using NULL legacy PIC
Jun 20 19:14:45.016541 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 19:14:45.016550 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:14:45.016558 kernel: Console: colour dummy device 80x25
Jun 20 19:14:45.016568 kernel: printk: legacy console [tty1] enabled
Jun 20 19:14:45.016576 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:14:45.016585 kernel: printk: legacy bootconsole [earlyser0] disabled
Jun 20 19:14:45.016594 kernel: ACPI: Core revision 20240827
Jun 20 19:14:45.016604 kernel: Failed to register legacy timer interrupt
Jun 20 19:14:45.016613 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:14:45.016622 kernel: x2apic enabled
Jun 20 19:14:45.016631 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:14:45.016639 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jun 20 19:14:45.016648 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 19:14:45.016657 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jun 20 19:14:45.016667 kernel: Hyper-V: Using IPI hypercalls
Jun 20 19:14:45.016675 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 19:14:45.016686 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 19:14:45.016695 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 19:14:45.016705 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 19:14:45.016713 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 19:14:45.016722 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 19:14:45.016731 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 20 19:14:45.016740 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jun 20 19:14:45.016749 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:14:45.016760 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:14:45.016769 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:14:45.016778 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:14:45.016786 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:14:45.016795 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:14:45.016803 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 19:14:45.017360 kernel: RETBleed: Vulnerable
Jun 20 19:14:45.017372 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:14:45.017381 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 19:14:45.017391 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:14:45.017399 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:14:45.017412 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:14:45.017421 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 19:14:45.017430 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 19:14:45.017439 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 19:14:45.017448 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jun 20 19:14:45.017457 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jun 20 19:14:45.017466 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jun 20 19:14:45.017475 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:14:45.017483 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 19:14:45.017492 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 19:14:45.017501 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 19:14:45.017513 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jun 20 19:14:45.017522 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jun 20 19:14:45.017530 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jun 20 19:14:45.017540 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jun 20 19:14:45.017556 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:14:45.017566 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:14:45.017576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:14:45.017586 kernel: landlock: Up and running.
Jun 20 19:14:45.017597 kernel: SELinux: Initializing.
Jun 20 19:14:45.017607 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:45.017617 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:14:45.017626 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jun 20 19:14:45.017638 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jun 20 19:14:45.017648 kernel: signal: max sigframe size: 11952
Jun 20 19:14:45.017657 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:14:45.017667 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:14:45.017676 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:14:45.017686 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 19:14:45.017695 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:14:45.017704 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:14:45.017713 kernel: .... node #0, CPUs: #1
Jun 20 19:14:45.017725 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:14:45.017735 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jun 20 19:14:45.017745 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved)
Jun 20 19:14:45.017754 kernel: devtmpfs: initialized
Jun 20 19:14:45.017763 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:14:45.017772 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 19:14:45.017782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:14:45.017791 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:14:45.017800 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:14:45.017813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:14:45.017823 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:14:45.017832 kernel: audit: type=2000 audit(1750446881.031:1): state=initialized audit_enabled=0 res=1
Jun 20 19:14:45.017842 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:14:45.017851 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:14:45.017860 kernel: cpuidle: using governor menu
Jun 20 19:14:45.017869 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:14:45.017880 kernel: dca service started, version 1.12.1
Jun 20 19:14:45.017888 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jun 20 19:14:45.017899 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jun 20 19:14:45.017908 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:14:45.017917 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:14:45.017925 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:14:45.017934 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:14:45.017943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:14:45.017952 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:14:45.017961 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:14:45.017972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:14:45.017981 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:14:45.017989 kernel: ACPI: Interpreter enabled
Jun 20 19:14:45.017998 kernel: ACPI: PM: (supports S0 S5)
Jun 20 19:14:45.018007 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:14:45.018016 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:14:45.018025 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 19:14:45.018034 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 19:14:45.018043 kernel: iommu: Default domain type: Translated
Jun 20 19:14:45.018052 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:14:45.018063 kernel: efivars: Registered efivars operations
Jun 20 19:14:45.018072 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:14:45.018080 kernel: PCI: System does not support PCI
Jun 20 19:14:45.018089 kernel: vgaarb: loaded
Jun 20 19:14:45.018098 kernel: clocksource: Switched to clocksource tsc-early
Jun 20 19:14:45.018107 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:14:45.018116 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:14:45.018125 kernel: pnp: PnP ACPI init
Jun 20 19:14:45.018134 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 19:14:45.018144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:14:45.018153 kernel: NET: Registered PF_INET protocol family
Jun 20 19:14:45.018162 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 19:14:45.018171 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 19:14:45.018180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:14:45.018189 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:14:45.018198 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 19:14:45.018207 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 19:14:45.018218 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:45.018226 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:14:45.018235 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:14:45.018244 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:14:45.018253 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:14:45.018261 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:14:45.018270 kernel: software IO TLB: mapped [mem 0x000000003a9da000-0x000000003e9da000] (64MB)
Jun 20 19:14:45.018279 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jun 20 19:14:45.018288 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jun 20 19:14:45.018299 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jun 20 19:14:45.018308 kernel: clocksource: Switched to clocksource tsc
Jun 20 19:14:45.018317 kernel: Initialise system trusted keyrings
Jun 20 19:14:45.018334 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 19:14:45.018344 kernel: Key type asymmetric registered
Jun 20 19:14:45.018352 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:14:45.018362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:14:45.018370 kernel: io scheduler mq-deadline registered
Jun 20 19:14:45.018402 kernel: io scheduler kyber registered
Jun 20 19:14:45.018414 kernel: io scheduler bfq registered
Jun 20 19:14:45.018422 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:14:45.018431 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:14:45.018441 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:45.018449 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 19:14:45.018457 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:14:45.018466 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 19:14:45.018615 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 19:14:45.018697 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:14:44 UTC (1750446884)
Jun 20 19:14:45.018767 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 19:14:45.018778 kernel: intel_pstate: Intel P-state driver initializing
Jun 20 19:14:45.018788 kernel: efifb: probing for efifb
Jun 20 19:14:45.018797 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 19:14:45.018807 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 19:14:45.018816 kernel: efifb: scrolling: redraw
Jun 20 19:14:45.018825 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 19:14:45.018835 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 19:14:45.018845 kernel: fb0: EFI VGA frame buffer device
Jun 20 19:14:45.018853 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:14:45.018862 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:14:45.018871 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:14:45.018880 kernel: Segment Routing with IPv6
Jun 20 19:14:45.018889 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:14:45.018898 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:14:45.018907 kernel: Key type dns_resolver registered
Jun 20 19:14:45.018916 kernel: IPI shorthand broadcast: enabled
Jun 20 19:14:45.018927 kernel: sched_clock: Marking stable (3151005353, 111942152)->(3650266467, -387318962)
Jun 20 19:14:45.018935 kernel: registered taskstats version 1
Jun 20 19:14:45.018944 kernel: Loading compiled-in X.509 certificates
Jun 20 19:14:45.018953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:14:45.018962 kernel: Demotion targets for Node 0: null
Jun 20 19:14:45.018970 kernel: Key type .fscrypt registered
Jun 20 19:14:45.018979 kernel: Key type fscrypt-provisioning registered
Jun 20 19:14:45.018988 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:14:45.018997 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:14:45.019007 kernel: ima: No architecture policies found
Jun 20 19:14:45.019016 kernel: clk: Disabling unused clocks
Jun 20 19:14:45.019025 kernel: Warning: unable to open an initial console.
Jun 20 19:14:45.019034 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:14:45.019043 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:14:45.019052 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:14:45.019060 kernel: Run /init as init process
Jun 20 19:14:45.019069 kernel: with arguments:
Jun 20 19:14:45.019078 kernel: /init
Jun 20 19:14:45.019088 kernel: with environment:
Jun 20 19:14:45.019097 kernel: HOME=/
Jun 20 19:14:45.019105 kernel: TERM=linux
Jun 20 19:14:45.019114 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:14:45.019125 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:14:45.019138 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:14:45.019149 systemd[1]: Detected virtualization microsoft.
Jun 20 19:14:45.019160 systemd[1]: Detected architecture x86-64.
Jun 20 19:14:45.019169 systemd[1]: Running in initrd.
Jun 20 19:14:45.019178 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:14:45.019188 systemd[1]: Hostname set to .
Jun 20 19:14:45.019197 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:14:45.019207 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:14:45.019216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:14:45.019226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:14:45.019238 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:14:45.019247 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:14:45.019257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:14:45.019268 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:14:45.019278 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:14:45.019288 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:14:45.019298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:14:45.019309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:14:45.019319 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:14:45.019951 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:14:45.019961 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:14:45.019971 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:14:45.019981 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:14:45.019991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:14:45.020001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:14:45.020010 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:14:45.020022 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:14:45.020032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:14:45.020041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:14:45.020050 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:14:45.020060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:14:45.020070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:14:45.020079 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:14:45.020090 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:14:45.020101 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:14:45.020110 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:14:45.020120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:14:45.020140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:14:45.020174 systemd-journald[205]: Collecting audit messages is disabled.
Jun 20 19:14:45.020201 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:14:45.020212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:14:45.020226 systemd-journald[205]: Journal started
Jun 20 19:14:45.020250 systemd-journald[205]: Runtime Journal (/run/log/journal/8014022dae104c71a42c4c23b554f064) is 8M, max 158.9M, 150.9M free.
Jun 20 19:14:45.026346 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:14:45.026516 systemd-modules-load[206]: Inserted module 'overlay'
Jun 20 19:14:45.030491 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:14:45.037662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:14:45.045764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:14:45.064351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:14:45.067087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:14:45.072015 kernel: Bridge firewalling registered
Jun 20 19:14:45.071696 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jun 20 19:14:45.072243 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:14:45.076093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:14:45.080770 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:14:45.084749 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:14:45.087664 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:14:45.098784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:14:45.101429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:14:45.112411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:14:45.116613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:14:45.121541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:14:45.126433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:14:45.131116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:14:45.149151 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:14:45.178431 systemd-resolved[245]: Positive Trust Anchors:
Jun 20 19:14:45.178446 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:14:45.178482 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:14:45.199265 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jun 20 19:14:45.202197 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:14:45.213315 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:14:45.227345 kernel: SCSI subsystem initialized
Jun 20 19:14:45.235342 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:14:45.245341 kernel: iscsi: registered transport (tcp)
Jun 20 19:14:45.263696 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:14:45.263760 kernel: QLogic iSCSI HBA Driver
Jun 20 19:14:45.278842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:14:45.295301 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:14:45.296245 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:14:45.332009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:14:45.336279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:14:45.384353 kernel: raid6: avx512x4 gen() 44619 MB/s Jun 20 19:14:45.402336 kernel: raid6: avx512x2 gen() 43284 MB/s Jun 20 19:14:45.420336 kernel: raid6: avx512x1 gen() 25063 MB/s Jun 20 19:14:45.438336 kernel: raid6: avx2x4 gen() 34502 MB/s Jun 20 19:14:45.456335 kernel: raid6: avx2x2 gen() 35598 MB/s Jun 20 19:14:45.474089 kernel: raid6: avx2x1 gen() 25561 MB/s Jun 20 19:14:45.474175 kernel: raid6: using algorithm avx512x4 gen() 44619 MB/s Jun 20 19:14:45.494336 kernel: raid6: .... xor() 6467 MB/s, rmw enabled Jun 20 19:14:45.494351 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:14:45.513356 kernel: xor: automatically using best checksumming function avx Jun 20 19:14:45.635358 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:14:45.641040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:14:45.643182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:14:45.664702 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 20 19:14:45.668855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:14:45.676721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:14:45.692450 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jun 20 19:14:45.712244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:14:45.717414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:14:45.754195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:14:45.760314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:14:45.811395 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:14:45.821356 kernel: AES CTR mode by8 optimization enabled Jun 20 19:14:45.846620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jun 20 19:14:45.846696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:45.858532 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 19:14:45.850470 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:45.867459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:14:45.883869 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:14:45.883924 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:14:45.901842 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 19:14:45.903368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:45.907510 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 19:14:45.916381 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 19:14:45.918361 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:14:45.920349 kernel: scsi host0: storvsc_host_t Jun 20 19:14:45.923756 kernel: PTP clock support registered Jun 20 19:14:45.923791 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 19:14:45.927448 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 19:14:45.935349 kernel: hv_vmbus: registering driver hv_pci Jun 20 19:14:45.938433 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 19:14:45.938467 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 19:14:45.942305 kernel: hv_vmbus: registering driver hv_utils Jun 20 19:14:45.950270 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 19:14:45.950297 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 19:14:45.950453 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 19:14:45.955960 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dc0aa 
(unnamed net_device) (uninitialized): VF slot 1 added Jun 20 19:14:45.956164 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 19:14:45.956177 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:45.956277 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 19:14:46.256701 systemd-resolved[245]: Clock change detected. Flushing caches. Jun 20 19:14:46.261505 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 20 19:14:46.266271 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 20 19:14:46.266440 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:46.270572 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 19:14:46.271384 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:14:46.271518 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 20 19:14:46.274507 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 19:14:46.274843 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 20 19:14:46.293511 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 20 19:14:46.296710 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:46.296924 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 20 19:14:46.308506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:46.319936 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 20 19:14:46.320136 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:46.332188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:14:46.583541 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 
19:14:46.589539 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:46.874632 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:14:47.112140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 20 19:14:47.140681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:14:47.154060 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 20 19:14:47.159730 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:14:47.169008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:14:47.169919 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:14:47.170101 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:14:47.170222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:14:47.170242 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:14:47.170951 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:14:47.173587 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:14:47.205463 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:14:47.210510 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:47.284520 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 20 19:14:47.294907 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 20 19:14:47.295123 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 20 19:14:47.295241 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:14:47.301744 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 20 19:14:47.307510 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 20 19:14:47.312510 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 20 19:14:47.312564 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 20 19:14:47.340532 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:14:47.340726 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 20 19:14:47.347590 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 20 19:14:47.362175 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 20 19:14:47.375515 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 20 19:14:47.391512 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dc0aa eth0: VF registering: eth1 Jun 20 19:14:47.391691 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 20 19:14:47.396513 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 20 19:14:48.223550 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:14:48.223618 disk-uuid[674]: The operation has completed successfully. Jun 20 19:14:48.281227 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:14:48.281324 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jun 20 19:14:48.314696 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:14:48.329017 sh[713]: Success Jun 20 19:14:48.362093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:14:48.362174 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:14:48.362189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:14:48.372517 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:14:48.602152 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:14:48.609582 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:14:48.622222 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:14:48.636511 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:14:48.636640 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (726) Jun 20 19:14:48.639511 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:14:48.642167 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:48.643533 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:14:48.933396 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:14:48.934924 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:14:48.935336 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:14:48.937626 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:14:48.938451 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 20 19:14:48.986542 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (759) Jun 20 19:14:48.994167 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:48.994234 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:48.994249 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:49.032760 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:14:49.039225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:14:49.049508 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:49.054800 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:14:49.061610 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:14:49.081369 systemd-networkd[889]: lo: Link UP Jun 20 19:14:49.085627 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:14:49.081377 systemd-networkd[889]: lo: Gained carrier Jun 20 19:14:49.091623 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:14:49.091817 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dc0aa eth0: Data path switched to VF: enP30832s1 Jun 20 19:14:49.083070 systemd-networkd[889]: Enumeration completed Jun 20 19:14:49.083425 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:14:49.083458 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:49.083461 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:14:49.089275 systemd[1]: Reached target network.target - Network. 
Jun 20 19:14:49.094445 systemd-networkd[889]: enP30832s1: Link UP Jun 20 19:14:49.094523 systemd-networkd[889]: eth0: Link UP Jun 20 19:14:49.094740 systemd-networkd[889]: eth0: Gained carrier Jun 20 19:14:49.094752 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:14:49.103896 systemd-networkd[889]: enP30832s1: Gained carrier Jun 20 19:14:49.113527 systemd-networkd[889]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:14:49.922901 ignition[896]: Ignition 2.21.0 Jun 20 19:14:49.922920 ignition[896]: Stage: fetch-offline Jun 20 19:14:49.925545 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:14:49.923021 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:49.928647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:14:49.923028 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:49.923131 ignition[896]: parsed url from cmdline: "" Jun 20 19:14:49.923133 ignition[896]: no config URL provided Jun 20 19:14:49.923138 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:14:49.923145 ignition[896]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:14:49.923149 ignition[896]: failed to fetch config: resource requires networking Jun 20 19:14:49.924170 ignition[896]: Ignition finished successfully Jun 20 19:14:49.953083 ignition[905]: Ignition 2.21.0 Jun 20 19:14:49.953094 ignition[905]: Stage: fetch Jun 20 19:14:49.953290 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:49.953298 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:49.953393 ignition[905]: parsed url from cmdline: "" Jun 20 19:14:49.953396 ignition[905]: no config URL provided Jun 20 19:14:49.953400 ignition[905]: reading system config file 
"/usr/lib/ignition/user.ign" Jun 20 19:14:49.953407 ignition[905]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:14:49.953452 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 19:14:50.018027 ignition[905]: GET result: OK Jun 20 19:14:50.018105 ignition[905]: config has been read from IMDS userdata Jun 20 19:14:50.018134 ignition[905]: parsing config with SHA512: f43f2580ae095dd90e64e09b66eb95d6f1fa6f0e1299c315c67226a817d7b065a5dfbd6aeb1f40ad871997717d0a656eb37c521b9d4cca9ab51a1ed189065e7f Jun 20 19:14:50.025035 unknown[905]: fetched base config from "system" Jun 20 19:14:50.025044 unknown[905]: fetched base config from "system" Jun 20 19:14:50.025417 ignition[905]: fetch: fetch complete Jun 20 19:14:50.025049 unknown[905]: fetched user config from "azure" Jun 20 19:14:50.025421 ignition[905]: fetch: fetch passed Jun 20 19:14:50.028020 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:14:50.025462 ignition[905]: Ignition finished successfully Jun 20 19:14:50.032536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:14:50.057780 ignition[911]: Ignition 2.21.0 Jun 20 19:14:50.057789 ignition[911]: Stage: kargs Jun 20 19:14:50.058024 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:50.058033 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:50.061882 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:14:50.059941 ignition[911]: kargs: kargs passed Jun 20 19:14:50.066269 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 19:14:50.059995 ignition[911]: Ignition finished successfully Jun 20 19:14:50.091513 ignition[918]: Ignition 2.21.0 Jun 20 19:14:50.091524 ignition[918]: Stage: disks Jun 20 19:14:50.091759 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:50.091767 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:50.096192 ignition[918]: disks: disks passed Jun 20 19:14:50.097417 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:14:50.096260 ignition[918]: Ignition finished successfully Jun 20 19:14:50.103973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:14:50.106587 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:14:50.109222 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:14:50.112531 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:14:50.116545 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:14:50.121444 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:14:50.193137 systemd-fsck[927]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 19:14:50.200456 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:14:50.207416 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:14:50.529523 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:14:50.530789 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:14:50.533360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:14:50.550957 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:14:50.555667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jun 20 19:14:50.569629 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:14:50.575600 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:14:50.576111 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:14:50.582433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:14:50.588512 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (936) Jun 20 19:14:50.588611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:14:50.591942 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:50.595685 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:50.595721 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:50.601135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:14:51.010654 systemd-networkd[889]: eth0: Gained IPv6LL Jun 20 19:14:51.067064 coreos-metadata[938]: Jun 20 19:14:51.066 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:14:51.070529 coreos-metadata[938]: Jun 20 19:14:51.070 INFO Fetch successful Jun 20 19:14:51.070529 coreos-metadata[938]: Jun 20 19:14:51.070 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:14:51.076175 systemd-networkd[889]: enP30832s1: Gained IPv6LL Jun 20 19:14:51.079632 coreos-metadata[938]: Jun 20 19:14:51.079 INFO Fetch successful Jun 20 19:14:51.096877 coreos-metadata[938]: Jun 20 19:14:51.096 INFO wrote hostname ci-4344.1.0-a-324c5119a7 to /sysroot/etc/hostname Jun 20 19:14:51.100228 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 20 19:14:51.188149 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:14:51.223309 initrd-setup-root[973]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:14:51.243463 initrd-setup-root[980]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:14:51.248448 initrd-setup-root[987]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:14:52.160780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:14:52.165816 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:14:52.175631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:14:52.181589 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:14:52.186279 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:52.208511 ignition[1054]: INFO : Ignition 2.21.0 Jun 20 19:14:52.208511 ignition[1054]: INFO : Stage: mount Jun 20 19:14:52.208511 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:52.208511 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:52.218886 ignition[1054]: INFO : mount: mount passed Jun 20 19:14:52.218886 ignition[1054]: INFO : Ignition finished successfully Jun 20 19:14:52.211476 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:14:52.215813 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:14:52.227815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:14:52.238437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 20 19:14:52.259824 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1067) Jun 20 19:14:52.259868 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:14:52.260941 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:14:52.261927 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:14:52.266943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:14:52.293044 ignition[1084]: INFO : Ignition 2.21.0 Jun 20 19:14:52.293044 ignition[1084]: INFO : Stage: files Jun 20 19:14:52.298532 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:52.298532 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:52.298532 ignition[1084]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:14:52.310524 ignition[1084]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:14:52.310524 ignition[1084]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:14:52.363181 ignition[1084]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:14:52.366595 ignition[1084]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:14:52.366595 ignition[1084]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:14:52.365017 unknown[1084]: wrote ssh authorized keys file for user: core Jun 20 19:14:52.374647 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:14:52.374647 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 20 19:14:52.400617 ignition[1084]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:14:52.455919 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:14:52.455919 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:14:52.461781 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 19:14:53.045968 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:14:53.646784 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:14:53.651604 ignition[1084]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:14:53.685540 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 20 19:14:54.436996 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:14:54.718052 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:14:54.718052 ignition[1084]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:14:54.748550 ignition[1084]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:14:54.759306 ignition[1084]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:14:54.759306 ignition[1084]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:14:54.770074 ignition[1084]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:14:54.770074 ignition[1084]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:14:54.770074 ignition[1084]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:14:54.770074 ignition[1084]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:14:54.770074 ignition[1084]: INFO : files: files passed Jun 20 19:14:54.770074 ignition[1084]: INFO : Ignition finished successfully Jun 20 19:14:54.761311 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:14:54.763928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:14:54.770662 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:14:54.781249 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:14:54.781325 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:14:54.806614 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:14:54.806614 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:14:54.814692 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:14:54.806726 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:14:54.813008 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:14:54.821614 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jun 20 19:14:54.849972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:14:54.850091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:14:54.854973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:14:54.855151 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:14:54.855256 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:14:54.860635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:14:54.890611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:14:54.892771 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:14:54.916604 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:14:54.916914 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:14:54.917207 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:14:54.917570 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:14:54.917702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:14:54.918222 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:14:54.918507 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:14:54.919050 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:14:54.919964 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:14:54.920167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:14:54.930093 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:14:54.933073 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jun 20 19:14:54.936149 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:14:54.940668 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:14:54.941821 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:14:54.942116 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:14:54.949536 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:14:54.949675 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:14:54.960511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:14:54.961144 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:14:54.961644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:14:54.961896 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:14:54.961998 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:14:54.962139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:14:54.962643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:14:54.962759 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:14:54.962982 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:14:54.963089 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:14:54.963300 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jun 20 19:14:55.025904 ignition[1138]: INFO : Ignition 2.21.0 Jun 20 19:14:55.025904 ignition[1138]: INFO : Stage: umount Jun 20 19:14:55.025904 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:14:55.025904 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:14:55.025904 ignition[1138]: INFO : umount: umount passed Jun 20 19:14:55.025904 ignition[1138]: INFO : Ignition finished successfully Jun 20 19:14:54.963401 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:14:54.965585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:14:54.965735 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:14:54.965850 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:14:54.967677 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:14:54.980190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:14:54.980379 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:14:54.987694 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:14:54.987831 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:14:55.006371 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:14:55.006470 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:14:55.022020 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:14:55.022108 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:14:55.027717 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:14:55.027764 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:14:55.030270 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jun 20 19:14:55.030308 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:14:55.033594 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:14:55.033630 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:14:55.036038 systemd[1]: Stopped target network.target - Network. Jun 20 19:14:55.038537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:14:55.038578 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:14:55.041390 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:14:55.043530 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:14:55.043741 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:14:55.047558 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:14:55.050568 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:14:55.051242 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:14:55.051280 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:14:55.057567 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:14:55.057599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:14:55.061549 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:14:55.061602 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:14:55.065569 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:14:55.065609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:14:55.068373 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:14:55.072601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:14:55.079598 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jun 20 19:14:55.079718 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:14:55.088079 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:14:55.088222 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:14:55.088302 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:14:55.093708 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:14:55.094181 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:14:55.096996 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:14:55.097036 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:14:55.101576 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:14:55.108539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:14:55.108602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:14:55.182587 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dc0aa eth0: Data path switched from VF: enP30832s1 Jun 20 19:14:55.182781 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:14:55.111452 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:14:55.111500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:14:55.114592 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:14:55.115689 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:14:55.122215 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:14:55.122266 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:14:55.161286 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:14:55.168622 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:14:55.168698 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:14:55.182947 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:14:55.183353 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:14:55.188945 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:14:55.189030 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:14:55.195696 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:14:55.196418 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:14:55.196462 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:14:55.200965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:14:55.201005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:14:55.204537 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:14:55.204603 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:14:55.214541 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:14:55.214613 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:14:55.217439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:14:55.217481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:14:55.222813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:14:55.223625 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:14:55.223676 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jun 20 19:14:55.227247 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:14:55.227303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:14:55.228992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:14:55.229034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:14:55.231097 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:14:55.231146 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:14:55.231185 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:14:55.238603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:14:55.238704 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:14:55.449768 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:14:55.449891 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:14:55.450930 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:14:55.451081 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:14:55.451164 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:14:55.453618 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:14:55.467421 systemd[1]: Switching root. Jun 20 19:14:55.526682 systemd-journald[205]: Journal stopped Jun 20 19:14:59.411138 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Jun 20 19:14:59.411176 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:14:59.411189 kernel: SELinux: policy capability open_perms=1 Jun 20 19:14:59.411200 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:14:59.411210 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:14:59.411221 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:14:59.411234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:14:59.411244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:14:59.411256 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:14:59.411266 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:14:59.411277 kernel: audit: type=1403 audit(1750446896.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:14:59.411291 systemd[1]: Successfully loaded SELinux policy in 155.079ms. Jun 20 19:14:59.411304 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.172ms. Jun 20 19:14:59.411319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:14:59.411332 systemd[1]: Detected virtualization microsoft. Jun 20 19:14:59.411341 systemd[1]: Detected architecture x86-64. Jun 20 19:14:59.411350 systemd[1]: Detected first boot. Jun 20 19:14:59.411360 systemd[1]: Hostname set to . Jun 20 19:14:59.411371 systemd[1]: Initializing machine ID from random generator. Jun 20 19:14:59.411381 zram_generator::config[1180]: No configuration found. 
Jun 20 19:14:59.411391 kernel: Guest personality initialized and is inactive Jun 20 19:14:59.411400 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 20 19:14:59.411411 kernel: Initialized host personality Jun 20 19:14:59.411419 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:14:59.411429 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:14:59.411441 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:14:59.411450 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:14:59.411460 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:14:59.411469 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:14:59.411479 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:14:59.411488 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:14:59.411518 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:14:59.411530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:14:59.411540 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:14:59.411550 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:14:59.411560 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:14:59.411570 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:14:59.411581 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:14:59.411591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:14:59.411602 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jun 20 19:14:59.411614 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:14:59.411626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:14:59.411637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:14:59.411647 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:14:59.411658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:14:59.411668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:14:59.411678 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:14:59.411689 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:14:59.411701 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:14:59.411711 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:14:59.411721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:14:59.411731 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:14:59.411742 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:14:59.411752 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:14:59.411763 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:14:59.411773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:14:59.411786 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:14:59.411797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:14:59.411807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:14:59.411817 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jun 20 19:14:59.411828 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:14:59.411842 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:14:59.411852 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:14:59.411863 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:14:59.411874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:14:59.411884 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:14:59.411894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:14:59.411904 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:14:59.411915 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:14:59.411928 systemd[1]: Reached target machines.target - Containers. Jun 20 19:14:59.411938 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:14:59.411949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:14:59.411961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:14:59.411971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:14:59.411981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:14:59.411991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:14:59.412001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:14:59.412011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jun 20 19:14:59.412024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:14:59.412034 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:14:59.412044 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:14:59.412055 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:14:59.412064 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:14:59.412074 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:14:59.412085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:14:59.412095 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:14:59.412107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:14:59.412118 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:14:59.412128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:14:59.412138 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:14:59.412149 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:14:59.412159 kernel: loop: module loaded Jun 20 19:14:59.412169 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:14:59.412179 systemd[1]: Stopped verity-setup.service. Jun 20 19:14:59.412191 kernel: fuse: init (API version 7.41) Jun 20 19:14:59.412201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:14:59.412211 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:14:59.412222 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:14:59.412232 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:14:59.412248 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:14:59.412258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:14:59.412268 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:14:59.412279 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:14:59.412291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:14:59.412302 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:14:59.412312 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:14:59.412322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:14:59.412352 systemd-journald[1273]: Collecting audit messages is disabled. Jun 20 19:14:59.412378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:14:59.412390 systemd-journald[1273]: Journal started Jun 20 19:14:59.412414 systemd-journald[1273]: Runtime Journal (/run/log/journal/ac6349f38c064ef68a6b69f6c622522e) is 8M, max 158.9M, 150.9M free. Jun 20 19:14:58.966350 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:14:58.975228 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 19:14:58.975699 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:14:59.417541 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:14:59.421039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:14:59.421218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 20 19:14:59.423002 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:14:59.423153 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:14:59.425788 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:14:59.425948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:14:59.428903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:14:59.431882 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:14:59.436755 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:14:59.442907 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:14:59.461022 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:14:59.469588 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:14:59.477583 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:14:59.482500 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:14:59.482539 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:14:59.485466 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:14:59.500677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:14:59.502822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:14:59.507282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jun 20 19:14:59.507512 kernel: ACPI: bus type drm_connector registered Jun 20 19:14:59.511262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:14:59.513775 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:14:59.514747 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:14:59.517171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:14:59.521889 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:14:59.525458 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:14:59.529735 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:14:59.538200 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:14:59.546651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:14:59.550038 systemd-journald[1273]: Time spent on flushing to /var/log/journal/ac6349f38c064ef68a6b69f6c622522e is 19.427ms for 983 entries. Jun 20 19:14:59.550038 systemd-journald[1273]: System Journal (/var/log/journal/ac6349f38c064ef68a6b69f6c622522e) is 8M, max 2.6G, 2.6G free. Jun 20 19:14:59.587073 systemd-journald[1273]: Received client request to flush runtime journal. Jun 20 19:14:59.550038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:14:59.556833 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:14:59.561955 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:14:59.576957 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jun 20 19:14:59.579299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:14:59.586639 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:14:59.589390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:14:59.597553 kernel: loop0: detected capacity change from 0 to 113872 Jun 20 19:14:59.627786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:14:59.644732 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:14:59.942398 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:14:59.947547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:14:59.976984 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:14:59.996524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:15:00.015528 kernel: loop1: detected capacity change from 0 to 229808 Jun 20 19:15:00.041386 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jun 20 19:15:00.041403 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jun 20 19:15:00.045194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:15:00.064584 kernel: loop2: detected capacity change from 0 to 28496 Jun 20 19:15:00.399957 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:15:00.404975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:15:00.433623 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Jun 20 19:15:00.453516 kernel: loop3: detected capacity change from 0 to 146240 Jun 20 19:15:00.597563 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 20 19:15:00.604714 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:15:00.647545 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:15:00.680531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#91 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:15:00.691693 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:15:00.763394 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:15:00.844520 kernel: loop4: detected capacity change from 0 to 113872 Jun 20 19:15:00.846529 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:15:00.864520 kernel: hv_vmbus: registering driver hv_balloon Jun 20 19:15:00.871528 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 19:15:00.874529 kernel: loop5: detected capacity change from 0 to 229808 Jun 20 19:15:00.880675 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 19:15:00.880751 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 19:15:00.882771 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:15:00.886556 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 19:15:00.886616 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:15:00.899514 kernel: loop6: detected capacity change from 0 to 28496 Jun 20 19:15:00.924406 systemd-networkd[1355]: lo: Link UP Jun 20 19:15:00.924419 systemd-networkd[1355]: lo: Gained carrier Jun 20 19:15:00.926519 systemd-networkd[1355]: Enumeration completed Jun 20 19:15:00.926627 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:15:00.930120 systemd-networkd[1355]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 20 19:15:00.930133 systemd-networkd[1355]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:15:00.930482 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:15:00.934599 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:15:00.942851 kernel: loop7: detected capacity change from 0 to 146240 Jun 20 19:15:00.942928 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:15:00.948522 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:15:00.956325 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e521dc0aa eth0: Data path switched to VF: enP30832s1 Jun 20 19:15:00.957026 systemd-networkd[1355]: enP30832s1: Link UP Jun 20 19:15:00.957207 systemd-networkd[1355]: eth0: Link UP Jun 20 19:15:00.957792 systemd-networkd[1355]: eth0: Gained carrier Jun 20 19:15:00.958582 systemd-networkd[1355]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:15:00.962968 systemd-networkd[1355]: enP30832s1: Gained carrier Jun 20 19:15:00.968126 (sd-merge)[1401]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 19:15:00.974994 (sd-merge)[1401]: Merged extensions into '/usr'. Jun 20 19:15:00.977102 systemd-networkd[1355]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:15:00.982360 systemd[1]: Reload requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:15:00.982379 systemd[1]: Reloading... Jun 20 19:15:01.093676 zram_generator::config[1458]: No configuration found. 
Jun 20 19:15:01.271689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:15:01.294520 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 20 19:15:01.395506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:15:01.398989 systemd[1]: Reloading finished in 415 ms. Jun 20 19:15:01.422810 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:15:01.426851 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:15:01.466564 systemd[1]: Starting ensure-sysext.service... Jun 20 19:15:01.471704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:15:01.476921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:15:01.481836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:15:01.503621 systemd[1]: Reload requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:15:01.503743 systemd[1]: Reloading... Jun 20 19:15:01.505321 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:15:01.505342 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:15:01.505555 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:15:01.505781 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jun 20 19:15:01.506458 systemd-tmpfiles[1524]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:15:01.506733 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. Jun 20 19:15:01.506779 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. Jun 20 19:15:01.529806 systemd-tmpfiles[1524]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:15:01.529816 systemd-tmpfiles[1524]: Skipping /boot Jun 20 19:15:01.538363 systemd-tmpfiles[1524]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:15:01.538381 systemd-tmpfiles[1524]: Skipping /boot Jun 20 19:15:01.573532 zram_generator::config[1557]: No configuration found. Jun 20 19:15:01.672878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:15:01.772620 systemd[1]: Reloading finished in 268 ms. Jun 20 19:15:01.794381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:15:01.795057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:15:01.808625 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:15:01.811737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:15:01.813563 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:15:01.820396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:15:01.822288 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:15:01.832416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:15:01.832622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:15:01.836591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:15:01.840038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:15:01.846790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:15:01.847353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:15:01.847463 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:15:01.847574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:15:01.850358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:15:01.852588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:15:01.852785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:15:01.852882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:15:01.852974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:15:01.857125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:15:01.857374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:15:01.860533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:15:01.860821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:15:01.860920 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:15:01.861071 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:15:01.861315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:15:01.871846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:15:01.872056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:15:01.872926 systemd[1]: Finished ensure-sysext.service. Jun 20 19:15:01.887146 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:15:01.889929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:15:01.893844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:15:01.894012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:15:01.896422 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:15:01.896595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 20 19:15:01.907000 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:15:01.911215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:15:01.911688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:15:01.915716 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:15:01.915899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:15:01.953809 systemd-resolved[1626]: Positive Trust Anchors: Jun 20 19:15:01.953821 systemd-resolved[1626]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:15:01.953852 systemd-resolved[1626]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:15:01.957890 systemd-resolved[1626]: Using system hostname 'ci-4344.1.0-a-324c5119a7'. Jun 20 19:15:01.959580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:15:01.961303 systemd[1]: Reached target network.target - Network. Jun 20 19:15:01.964660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:15:02.043752 augenrules[1662]: No rules Jun 20 19:15:02.044872 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:15:02.045106 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jun 20 19:15:02.274661 systemd-networkd[1355]: eth0: Gained IPv6LL Jun 20 19:15:02.277304 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:15:02.280784 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:15:02.318200 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:15:02.322771 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:15:02.530665 systemd-networkd[1355]: enP30832s1: Gained IPv6LL Jun 20 19:15:04.177524 ldconfig[1315]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:15:04.190346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:15:04.195881 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:15:04.213170 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:15:04.217758 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:15:04.219315 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:15:04.220831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:15:04.222348 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:15:04.225663 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:15:04.228588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:15:04.231553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 20 19:15:04.234575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:15:04.234609 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:15:04.235767 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:15:04.255015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:15:04.257827 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:15:04.262754 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:15:04.264680 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:15:04.266355 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:15:04.271325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:15:04.272971 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:15:04.277134 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:15:04.279236 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:15:04.280583 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:15:04.281843 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:15:04.281869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:15:04.284238 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 19:15:04.288577 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:15:04.293284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:15:04.298691 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jun 20 19:15:04.304972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:15:04.309053 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:15:04.314273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:15:04.318587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:15:04.323660 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:15:04.326187 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 20 19:15:04.328674 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 19:15:04.331057 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 19:15:04.333778 jq[1680]: false Jun 20 19:15:04.334140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:15:04.338684 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:15:04.344185 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:15:04.352560 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:15:04.356596 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:15:04.359888 KVP[1686]: KVP starting; pid is:1686 Jun 20 19:15:04.360652 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:15:04.366431 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 20 19:15:04.369422 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:15:04.370909 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:15:04.373258 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:15:04.377889 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:15:04.382056 kernel: hv_utils: KVP IC version 4.0 Jun 20 19:15:04.385542 KVP[1686]: KVP LIC Version: 3.1 Jun 20 19:15:04.391743 extend-filesystems[1681]: Found /dev/nvme0n1p6 Jun 20 19:15:04.388663 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:15:04.391682 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:15:04.391876 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:15:04.404605 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing passwd entry cache Jun 20 19:15:04.401692 oslogin_cache_refresh[1685]: Refreshing passwd entry cache Jun 20 19:15:04.415968 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:15:04.416572 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:15:04.424877 jq[1700]: true Jun 20 19:15:04.427249 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting users, quitting Jun 20 19:15:04.427249 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jun 20 19:15:04.427249 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing group entry cache Jun 20 19:15:04.426779 oslogin_cache_refresh[1685]: Failure getting users, quitting Jun 20 19:15:04.426797 oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:15:04.426837 oslogin_cache_refresh[1685]: Refreshing group entry cache Jun 20 19:15:04.433287 extend-filesystems[1681]: Found /dev/nvme0n1p9 Jun 20 19:15:04.437608 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:15:04.437813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:15:04.445183 extend-filesystems[1681]: Checking size of /dev/nvme0n1p9 Jun 20 19:15:04.451717 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting groups, quitting Jun 20 19:15:04.451717 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:15:04.449977 oslogin_cache_refresh[1685]: Failure getting groups, quitting Jun 20 19:15:04.449990 oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:15:04.452220 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:15:04.452731 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 19:15:04.455477 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 20 19:15:04.457392 (ntainerd)[1721]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:15:04.465383 chronyd[1731]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 19:15:04.467107 jq[1724]: true Jun 20 19:15:04.476563 chronyd[1731]: Timezone right/UTC failed leap second check, ignoring Jun 20 19:15:04.476741 chronyd[1731]: Loaded seccomp filter (level 2) Jun 20 19:15:04.481359 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 19:15:04.483549 update_engine[1697]: I20250620 19:15:04.482175 1697 main.cc:92] Flatcar Update Engine starting Jun 20 19:15:04.488950 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:15:04.491091 tar[1710]: linux-amd64/LICENSE Jun 20 19:15:04.491091 tar[1710]: linux-amd64/helm Jun 20 19:15:04.511448 extend-filesystems[1681]: Old size kept for /dev/nvme0n1p9 Jun 20 19:15:04.508419 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:15:04.508657 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:15:04.556447 dbus-daemon[1678]: [system] SELinux support is enabled Jun 20 19:15:04.556875 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:15:04.562146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:15:04.562188 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:15:04.564659 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jun 20 19:15:04.564690 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:15:04.570171 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:15:04.575097 update_engine[1697]: I20250620 19:15:04.570693 1697 update_check_scheduler.cc:74] Next update check in 5m9s Jun 20 19:15:04.598961 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:15:04.625838 bash[1756]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:15:04.627971 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:15:04.632623 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:15:04.661112 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:15:04.665242 systemd-logind[1696]: New seat seat0. Jun 20 19:15:04.668323 systemd-logind[1696]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:15:04.668533 systemd[1]: Started systemd-logind.service - User Login Management. 
Jun 20 19:15:04.710644 coreos-metadata[1677]: Jun 20 19:15:04.710 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:15:04.714772 coreos-metadata[1677]: Jun 20 19:15:04.712 INFO Fetch successful Jun 20 19:15:04.715649 coreos-metadata[1677]: Jun 20 19:15:04.715 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 19:15:04.720519 coreos-metadata[1677]: Jun 20 19:15:04.719 INFO Fetch successful Jun 20 19:15:04.720519 coreos-metadata[1677]: Jun 20 19:15:04.720 INFO Fetching http://168.63.129.16/machine/11848574-31d1-41d9-b6f9-0dea97377ac7/4151dd94%2D6ea1%2D40c2%2D837a%2Dccf7139c3a36.%5Fci%2D4344.1.0%2Da%2D324c5119a7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 19:15:04.721682 coreos-metadata[1677]: Jun 20 19:15:04.721 INFO Fetch successful Jun 20 19:15:04.722459 coreos-metadata[1677]: Jun 20 19:15:04.722 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:15:04.747532 coreos-metadata[1677]: Jun 20 19:15:04.747 INFO Fetch successful Jun 20 19:15:04.835139 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:15:04.846505 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:15:04.852351 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 19:15:04.857227 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:15:04.863196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:15:04.895465 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:15:04.897567 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:15:04.901084 locksmithd[1759]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:15:04.905581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 20 19:15:04.918675 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 19:15:04.940862 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:15:04.944874 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:15:04.949851 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:15:04.952760 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:15:05.347133 tar[1710]: linux-amd64/README.md Jun 20 19:15:05.361158 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:15:05.489423 containerd[1721]: time="2025-06-20T19:15:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:15:05.489423 containerd[1721]: time="2025-06-20T19:15:05.489250099Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.496868207Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.095µs" Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.496904495Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.496924548Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497074506Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497088448Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:15:05.497512 
containerd[1721]: time="2025-06-20T19:15:05.497111908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497168634Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497180493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497430135Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497442869Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497453024Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497512 containerd[1721]: time="2025-06-20T19:15:05.497461450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:15:05.497879 containerd[1721]: time="2025-06-20T19:15:05.497867071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:15:05.498122 containerd[1721]: time="2025-06-20T19:15:05.498089735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:15:05.498181 containerd[1721]: time="2025-06-20T19:15:05.498122076Z" level=info 
msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:15:05.498181 containerd[1721]: time="2025-06-20T19:15:05.498132888Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:15:05.498181 containerd[1721]: time="2025-06-20T19:15:05.498162450Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:15:05.498410 containerd[1721]: time="2025-06-20T19:15:05.498383139Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:15:05.498674 containerd[1721]: time="2025-06-20T19:15:05.498473345Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:15:05.515756 containerd[1721]: time="2025-06-20T19:15:05.515676417Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:15:05.515989 containerd[1721]: time="2025-06-20T19:15:05.515907330Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.515935476Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516065611Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516081786Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516094914Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: 
time="2025-06-20T19:15:05.516110476Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516124993Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516138237Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516149900Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516160424Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516174718Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516319424Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516339087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516355776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:15:05.516937 containerd[1721]: time="2025-06-20T19:15:05.516371403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516385122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516397436Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516411755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516424260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516439900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516452577Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516465293Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516558102Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516573960Z" level=info msg="Start snapshots syncer" Jun 20 19:15:05.517297 containerd[1721]: time="2025-06-20T19:15:05.516598220Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:15:05.517523 containerd[1721]: time="2025-06-20T19:15:05.516883596Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:15:05.517523 containerd[1721]: time="2025-06-20T19:15:05.517377829Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:15:05.517895 containerd[1721]: time="2025-06-20T19:15:05.517875807Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:15:05.518051 containerd[1721]: time="2025-06-20T19:15:05.518037445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:15:05.518122 containerd[1721]: time="2025-06-20T19:15:05.518110835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:15:05.518174 containerd[1721]: time="2025-06-20T19:15:05.518165528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:15:05.518220 containerd[1721]: time="2025-06-20T19:15:05.518208705Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:15:05.518301 containerd[1721]: time="2025-06-20T19:15:05.518289463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:15:05.518340 containerd[1721]: time="2025-06-20T19:15:05.518332410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:15:05.518387 containerd[1721]: time="2025-06-20T19:15:05.518378785Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:15:05.518452 containerd[1721]: time="2025-06-20T19:15:05.518442934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:15:05.518514 containerd[1721]: time="2025-06-20T19:15:05.518486786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:15:05.518562 containerd[1721]: time="2025-06-20T19:15:05.518549756Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:15:05.518622 containerd[1721]: time="2025-06-20T19:15:05.518614014Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:15:05.518677 containerd[1721]: time="2025-06-20T19:15:05.518665837Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518709731Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518727439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518740092Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518754565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518767700Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518791279Z" level=info msg="runtime interface created" Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518797310Z" level=info msg="created NRI interface" Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518810774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518827060Z" level=info msg="Connect containerd service" Jun 20 19:15:05.518923 containerd[1721]: time="2025-06-20T19:15:05.518867491Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:15:05.522537 
containerd[1721]: time="2025-06-20T19:15:05.522488144Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:15:05.781776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:15:05.799871 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129517525Z" level=info msg="Start subscribing containerd event" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129579620Z" level=info msg="Start recovering state" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129874406Z" level=info msg="Start event monitor" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129889306Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129897345Z" level=info msg="Start streaming server" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129910528Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129917901Z" level=info msg="runtime interface starting up..." Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129924366Z" level=info msg="starting plugins..." Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.129940631Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.130148080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.130182860Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 20 19:15:06.130621 containerd[1721]: time="2025-06-20T19:15:06.130254858Z" level=info msg="containerd successfully booted in 0.642142s" Jun 20 19:15:06.130642 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:15:06.133953 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:15:06.135931 systemd[1]: Startup finished in 3.312s (kernel) + 11.716s (initrd) + 9.370s (userspace) = 24.398s. Jun 20 19:15:06.312557 login[1816]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:15:06.315723 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:15:06.329046 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:15:06.332976 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:15:06.348430 systemd-logind[1696]: New session 2 of user core. Jun 20 19:15:06.355518 systemd-logind[1696]: New session 1 of user core. Jun 20 19:15:06.364083 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:15:06.370388 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:15:06.381380 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:15:06.384772 systemd-logind[1696]: New session c1 of user core. 
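Editor's note: the systemd "Startup finished" summary above reports per-stage durations and a total; a minimal sketch sanity-checking that arithmetic (figures copied from the log):

```python
# Verify the systemd startup summary from the log:
# 3.312s (kernel) + 11.716s (initrd) + 9.370s (userspace) = 24.398s
from decimal import Decimal

stages = {
    "kernel": Decimal("3.312"),
    "initrd": Decimal("11.716"),
    "userspace": Decimal("9.370"),
}
total = sum(stages.values())
print(total)  # 24.398, matching the logged total
```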
Jun 20 19:15:06.480731 waagent[1814]: 2025-06-20T19:15:06.478121Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 19:15:06.481382 waagent[1814]: 2025-06-20T19:15:06.481332Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 19:15:06.483304 waagent[1814]: 2025-06-20T19:15:06.483265Z INFO Daemon Daemon Python: 3.11.12 Jun 20 19:15:06.485365 waagent[1814]: 2025-06-20T19:15:06.485316Z INFO Daemon Daemon Run daemon Jun 20 19:15:06.486886 waagent[1814]: 2025-06-20T19:15:06.486848Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 19:15:06.488304 waagent[1814]: 2025-06-20T19:15:06.488260Z INFO Daemon Daemon Using waagent for provisioning Jun 20 19:15:06.492584 waagent[1814]: 2025-06-20T19:15:06.492543Z INFO Daemon Daemon Activate resource disk Jun 20 19:15:06.496226 waagent[1814]: 2025-06-20T19:15:06.494784Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 19:15:06.500998 waagent[1814]: 2025-06-20T19:15:06.500958Z INFO Daemon Daemon Found device: None Jun 20 19:15:06.502317 waagent[1814]: 2025-06-20T19:15:06.502278Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 19:15:06.507583 waagent[1814]: 2025-06-20T19:15:06.507546Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 19:15:06.513420 waagent[1814]: 2025-06-20T19:15:06.513384Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:15:06.515740 waagent[1814]: 2025-06-20T19:15:06.515705Z INFO Daemon Daemon Running default provisioning handler Jun 20 19:15:06.529743 waagent[1814]: 2025-06-20T19:15:06.529696Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jun 20 19:15:06.541242 waagent[1814]: 2025-06-20T19:15:06.530872Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 19:15:06.541242 waagent[1814]: 2025-06-20T19:15:06.532560Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 19:15:06.541242 waagent[1814]: 2025-06-20T19:15:06.532638Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 19:15:06.546870 kubelet[1838]: E0620 19:15:06.546833 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:15:06.550107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:15:06.550246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:15:06.551801 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 265.8M memory peak. Jun 20 19:15:06.587598 waagent[1814]: 2025-06-20T19:15:06.585714Z INFO Daemon Daemon Successfully mounted dvd Jun 20 19:15:06.598061 systemd[1858]: Queued start job for default target default.target. Jun 20 19:15:06.603774 systemd[1858]: Created slice app.slice - User Application Slice. Jun 20 19:15:06.603802 systemd[1858]: Reached target paths.target - Paths. Jun 20 19:15:06.603837 systemd[1858]: Reached target timers.target - Timers. Jun 20 19:15:06.605607 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:15:06.615929 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:15:06.616026 systemd[1858]: Reached target sockets.target - Sockets. Jun 20 19:15:06.616062 systemd[1858]: Reached target basic.target - Basic System. Jun 20 19:15:06.616136 systemd[1858]: Reached target default.target - Main User Target. 
Jun 20 19:15:06.616162 systemd[1858]: Startup finished in 223ms. Jun 20 19:15:06.616279 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 19:15:06.617082 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:15:06.618329 waagent[1814]: 2025-06-20T19:15:06.618273Z INFO Daemon Daemon Detect protocol endpoint Jun 20 19:15:06.620033 waagent[1814]: 2025-06-20T19:15:06.619911Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:15:06.623924 waagent[1814]: 2025-06-20T19:15:06.621864Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 20 19:15:06.623924 waagent[1814]: 2025-06-20T19:15:06.622787Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 19:15:06.623924 waagent[1814]: 2025-06-20T19:15:06.623228Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 19:15:06.623924 waagent[1814]: 2025-06-20T19:15:06.623437Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 19:15:06.629512 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:15:06.635465 waagent[1814]: 2025-06-20T19:15:06.635398Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 19:15:06.641427 waagent[1814]: 2025-06-20T19:15:06.636086Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 19:15:06.641427 waagent[1814]: 2025-06-20T19:15:06.636277Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 19:15:06.636823 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:15:06.730427 waagent[1814]: 2025-06-20T19:15:06.730330Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 19:15:06.732345 waagent[1814]: 2025-06-20T19:15:06.730998Z INFO Daemon Daemon Forcing an update of the goal state. 
Jun 20 19:15:06.740664 waagent[1814]: 2025-06-20T19:15:06.740616Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:15:06.757809 waagent[1814]: 2025-06-20T19:15:06.757766Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 19:15:06.762296 waagent[1814]: 2025-06-20T19:15:06.758548Z INFO Daemon Jun 20 19:15:06.762296 waagent[1814]: 2025-06-20T19:15:06.758821Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 10a63a20-d5e8-414e-aa44-4b3bc22401d9 eTag: 10289610480338394754 source: Fabric] Jun 20 19:15:06.762296 waagent[1814]: 2025-06-20T19:15:06.759095Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 19:15:06.762296 waagent[1814]: 2025-06-20T19:15:06.759408Z INFO Daemon Jun 20 19:15:06.762296 waagent[1814]: 2025-06-20T19:15:06.759824Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:15:06.765515 waagent[1814]: 2025-06-20T19:15:06.764896Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 19:15:06.840525 waagent[1814]: 2025-06-20T19:15:06.840436Z INFO Daemon Downloaded certificate {'thumbprint': '8929B8E61D972FDF722DA6BE10B5980716795A65', 'hasPrivateKey': True} Jun 20 19:15:06.843158 waagent[1814]: 2025-06-20T19:15:06.843120Z INFO Daemon Fetch goal state completed Jun 20 19:15:06.852253 waagent[1814]: 2025-06-20T19:15:06.852214Z INFO Daemon Daemon Starting provisioning Jun 20 19:15:06.854535 waagent[1814]: 2025-06-20T19:15:06.852839Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 20 19:15:06.854535 waagent[1814]: 2025-06-20T19:15:06.853111Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-324c5119a7] Jun 20 19:15:06.871635 waagent[1814]: 2025-06-20T19:15:06.871582Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-324c5119a7] Jun 20 19:15:06.876128 waagent[1814]: 2025-06-20T19:15:06.872363Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 19:15:06.876128 waagent[1814]: 2025-06-20T19:15:06.872730Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 19:15:06.880753 systemd-networkd[1355]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:15:06.880761 systemd-networkd[1355]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:15:06.880794 systemd-networkd[1355]: eth0: DHCP lease lost Jun 20 19:15:06.881737 waagent[1814]: 2025-06-20T19:15:06.881684Z INFO Daemon Daemon Create user account if not exists Jun 20 19:15:06.881888 waagent[1814]: 2025-06-20T19:15:06.881862Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 19:15:06.881964 waagent[1814]: 2025-06-20T19:15:06.881928Z INFO Daemon Daemon Configure sudoer Jun 20 19:15:06.889708 waagent[1814]: 2025-06-20T19:15:06.889338Z INFO Daemon Daemon Configure sshd Jun 20 19:15:06.893676 waagent[1814]: 2025-06-20T19:15:06.893628Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 19:15:06.895428 waagent[1814]: 2025-06-20T19:15:06.895098Z INFO Daemon Daemon Deploy ssh public key. 
Jun 20 19:15:06.899555 systemd-networkd[1355]: eth0: DHCPv4 address 10.200.4.8/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:15:07.982668 waagent[1814]: 2025-06-20T19:15:07.982601Z INFO Daemon Daemon Provisioning complete Jun 20 19:15:07.993613 waagent[1814]: 2025-06-20T19:15:07.993576Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 19:15:07.999123 waagent[1814]: 2025-06-20T19:15:07.994186Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 19:15:07.999123 waagent[1814]: 2025-06-20T19:15:07.994526Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 19:15:08.101842 waagent[1908]: 2025-06-20T19:15:08.101752Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 19:15:08.102185 waagent[1908]: 2025-06-20T19:15:08.101888Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 19:15:08.102185 waagent[1908]: 2025-06-20T19:15:08.101929Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 19:15:08.102185 waagent[1908]: 2025-06-20T19:15:08.101968Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 20 19:15:08.138858 waagent[1908]: 2025-06-20T19:15:08.138781Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 19:15:08.139025 waagent[1908]: 2025-06-20T19:15:08.138998Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:08.139080 waagent[1908]: 2025-06-20T19:15:08.139056Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:08.149382 waagent[1908]: 2025-06-20T19:15:08.149318Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:15:08.160939 waagent[1908]: 2025-06-20T19:15:08.160895Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 
19:15:08.161354 waagent[1908]: 2025-06-20T19:15:08.161325Z INFO ExtHandler Jun 20 19:15:08.161422 waagent[1908]: 2025-06-20T19:15:08.161381Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: aa17e330-0a7a-4b5c-8f9c-d797a979fde6 eTag: 10289610480338394754 source: Fabric] Jun 20 19:15:08.161645 waagent[1908]: 2025-06-20T19:15:08.161616Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jun 20 19:15:08.162005 waagent[1908]: 2025-06-20T19:15:08.161978Z INFO ExtHandler Jun 20 19:15:08.162054 waagent[1908]: 2025-06-20T19:15:08.162020Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:15:08.168661 waagent[1908]: 2025-06-20T19:15:08.168625Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 19:15:08.237317 waagent[1908]: 2025-06-20T19:15:08.237194Z INFO ExtHandler Downloaded certificate {'thumbprint': '8929B8E61D972FDF722DA6BE10B5980716795A65', 'hasPrivateKey': True} Jun 20 19:15:08.237719 waagent[1908]: 2025-06-20T19:15:08.237685Z INFO ExtHandler Fetch goal state completed Jun 20 19:15:08.249057 waagent[1908]: 2025-06-20T19:15:08.248991Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 19:15:08.253757 waagent[1908]: 2025-06-20T19:15:08.253705Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1908 Jun 20 19:15:08.253884 waagent[1908]: 2025-06-20T19:15:08.253860Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 19:15:08.254148 waagent[1908]: 2025-06-20T19:15:08.254126Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 19:15:08.255260 waagent[1908]: 2025-06-20T19:15:08.255221Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 19:15:08.255632 waagent[1908]: 
2025-06-20T19:15:08.255607Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 19:15:08.255755 waagent[1908]: 2025-06-20T19:15:08.255734Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 19:15:08.256169 waagent[1908]: 2025-06-20T19:15:08.256145Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 19:15:08.276428 waagent[1908]: 2025-06-20T19:15:08.276391Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 19:15:08.276608 waagent[1908]: 2025-06-20T19:15:08.276586Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 19:15:08.282224 waagent[1908]: 2025-06-20T19:15:08.282178Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 19:15:08.287776 systemd[1]: Reload requested from client PID 1923 ('systemctl') (unit waagent.service)... Jun 20 19:15:08.287789 systemd[1]: Reloading... Jun 20 19:15:08.356535 zram_generator::config[1957]: No configuration found. Jun 20 19:15:08.453245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:15:08.554139 systemd[1]: Reloading finished in 266 ms. 
Jun 20 19:15:08.567017 waagent[1908]: 2025-06-20T19:15:08.566431Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 19:15:08.567017 waagent[1908]: 2025-06-20T19:15:08.566614Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 19:15:08.681370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:15:08.879314 waagent[1908]: 2025-06-20T19:15:08.879184Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 19:15:08.879591 waagent[1908]: 2025-06-20T19:15:08.879561Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 19:15:08.880376 waagent[1908]: 2025-06-20T19:15:08.880340Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 19:15:08.880582 waagent[1908]: 2025-06-20T19:15:08.880545Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:08.880644 waagent[1908]: 2025-06-20T19:15:08.880624Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:08.880834 waagent[1908]: 2025-06-20T19:15:08.880812Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 19:15:08.881211 waagent[1908]: 2025-06-20T19:15:08.881183Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jun 20 19:15:08.881415 waagent[1908]: 2025-06-20T19:15:08.881393Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:15:08.881659 waagent[1908]: 2025-06-20T19:15:08.881633Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 19:15:08.881917 waagent[1908]: 2025-06-20T19:15:08.881890Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 19:15:08.881917 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 19:15:08.881917 waagent[1908]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 19:15:08.881917 waagent[1908]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 19:15:08.881917 waagent[1908]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:08.881917 waagent[1908]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:08.881917 waagent[1908]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:15:08.882182 waagent[1908]: 2025-06-20T19:15:08.881922Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 19:15:08.882182 waagent[1908]: 2025-06-20T19:15:08.881966Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:15:08.882268 waagent[1908]: 2025-06-20T19:15:08.882219Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 19:15:08.882328 waagent[1908]: 2025-06-20T19:15:08.882309Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 19:15:08.882634 waagent[1908]: 2025-06-20T19:15:08.882613Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
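Editor's note: the routing table the MonitorHandler dumps above comes from /proc/net/route, which stores IPv4 addresses as little-endian hexadecimal. A small sketch decoding the default-route entry (hex values copied from the log):

```python
import socket
import struct

def route_hex_to_ip(h: str) -> str:
    """Decode a little-endian hex address from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

# Default route line from the log: "eth0  00000000  0104C80A ..."
gateway = route_hex_to_ip("0104C80A")
subnet = route_hex_to_ip("0004C80A")
print(gateway)  # 10.200.4.1 (the gateway, matching the DHCP lease logged earlier)
print(subnet)   # 10.200.4.0 (the on-link subnet)
```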
Jun 20 19:15:08.883053 waagent[1908]: 2025-06-20T19:15:08.882755Z INFO EnvHandler ExtHandler Configure routes Jun 20 19:15:08.883587 waagent[1908]: 2025-06-20T19:15:08.883558Z INFO EnvHandler ExtHandler Gateway:None Jun 20 19:15:08.884306 waagent[1908]: 2025-06-20T19:15:08.884258Z INFO EnvHandler ExtHandler Routes:None Jun 20 19:15:08.902520 waagent[1908]: 2025-06-20T19:15:08.901619Z INFO ExtHandler ExtHandler Jun 20 19:15:08.902520 waagent[1908]: 2025-06-20T19:15:08.901704Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 03bb9f6c-5aa6-4079-b1be-0f52a137a84f correlation e96b7aab-88cc-4a92-a9eb-5aa981106a21 created: 2025-06-20T19:14:12.932106Z] Jun 20 19:15:08.902520 waagent[1908]: 2025-06-20T19:15:08.902059Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 19:15:08.902814 waagent[1908]: 2025-06-20T19:15:08.902779Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 20 19:15:08.921207 waagent[1908]: 2025-06-20T19:15:08.921155Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 19:15:08.921207 waagent[1908]: Executing ['ip', '-a', '-o', 'link']: Jun 20 19:15:08.921207 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 19:15:08.921207 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:c0:aa brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 20 19:15:08.921207 waagent[1908]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:c0:aa brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jun 20 19:15:08.921207 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 19:15:08.921207 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 
19:15:08.921207 waagent[1908]: 2: eth0 inet 10.200.4.8/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 19:15:08.921207 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 19:15:08.921207 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 19:15:08.921207 waagent[1908]: 2: eth0 inet6 fe80::7e1e:52ff:fe1d:c0aa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:15:08.921207 waagent[1908]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe1d:c0aa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:15:08.946104 waagent[1908]: 2025-06-20T19:15:08.945873Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 19:15:08.946104 waagent[1908]: Try `iptables -h' or 'iptables --help' for more information.) 
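Editor's note: the eth0 link-local address listed above (fe80::7e1e:52ff:fe1d:c0aa) is the modified EUI-64 form of the NIC's MAC address 7c:1e:52:1d:c0:aa. A minimal sketch of that derivation:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    # Modified EUI-64: flip the universal/local bit of the first octet,
    # insert ff:fe between the two halves of the MAC, prefix with fe80::/64.
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

addr = mac_to_link_local("7c:1e:52:1d:c0:aa")
print(addr)  # fe80::7e1e:52ff:fe1d:c0aa, matching the 'ip -6 address' output
```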
Jun 20 19:15:08.946298 waagent[1908]: 2025-06-20T19:15:08.946249Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 651188BC-1435-40CF-8B06-D094BC29BE06;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jun 20 19:15:08.978434 waagent[1908]: 2025-06-20T19:15:08.978379Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jun 20 19:15:08.978434 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:08.978434 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.978434 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:08.978434 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.978434 waagent[1908]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes)
Jun 20 19:15:08.978434 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.978434 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:15:08.978434 waagent[1908]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:15:08.978434 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:15:08.981218 waagent[1908]: 2025-06-20T19:15:08.981167Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 20 19:15:08.981218 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:08.981218 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.981218 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 20 19:15:08.981218 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.981218 waagent[1908]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes)
Jun 20 19:15:08.981218 waagent[1908]: pkts bytes target prot opt in out source destination
Jun 20 19:15:08.981218 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 20 19:15:08.981218 waagent[1908]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 20 19:15:08.981218 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 20 19:15:16.801368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:15:16.803233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:17.329913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:17.335708 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:17.371418 kubelet[2059]: E0620 19:15:17.371338 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:17.374878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:17.374987 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:17.375321 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108.1M memory peak.
Jun 20 19:15:27.626037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:15:27.627833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:28.141743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:28.147718 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:28.182517 kubelet[2074]: E0620 19:15:28.182430 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:28.184735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:28.184869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:28.185167 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.3M memory peak.
Jun 20 19:15:28.262650 chronyd[1731]: Selected source PHC0
Jun 20 19:15:30.000742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:15:30.002090 systemd[1]: Started sshd@0-10.200.4.8:22-10.200.16.10:53844.service - OpenSSH per-connection server daemon (10.200.16.10:53844).
Jun 20 19:15:30.796634 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 53844 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:30.798067 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:30.802778 systemd-logind[1696]: New session 3 of user core.
Jun 20 19:15:30.808640 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:15:31.311114 systemd[1]: Started sshd@1-10.200.4.8:22-10.200.16.10:53852.service - OpenSSH per-connection server daemon (10.200.16.10:53852).
Jun 20 19:15:31.902017 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 53852 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:31.903464 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:31.908475 systemd-logind[1696]: New session 4 of user core.
Jun 20 19:15:31.917629 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:15:32.326790 sshd[2089]: Connection closed by 10.200.16.10 port 53852
Jun 20 19:15:32.327743 sshd-session[2087]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:32.331635 systemd[1]: sshd@1-10.200.4.8:22-10.200.16.10:53852.service: Deactivated successfully.
Jun 20 19:15:32.333200 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:15:32.333993 systemd-logind[1696]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:15:32.335109 systemd-logind[1696]: Removed session 4.
Jun 20 19:15:32.435961 systemd[1]: Started sshd@2-10.200.4.8:22-10.200.16.10:53868.service - OpenSSH per-connection server daemon (10.200.16.10:53868).
Jun 20 19:15:33.032572 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 53868 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:33.033730 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:33.038762 systemd-logind[1696]: New session 5 of user core.
Jun 20 19:15:33.047682 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:15:33.459633 sshd[2097]: Connection closed by 10.200.16.10 port 53868
Jun 20 19:15:33.460443 sshd-session[2095]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:33.464260 systemd[1]: sshd@2-10.200.4.8:22-10.200.16.10:53868.service: Deactivated successfully.
Jun 20 19:15:33.465870 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:15:33.466580 systemd-logind[1696]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:15:33.467893 systemd-logind[1696]: Removed session 5.
Jun 20 19:15:33.563648 systemd[1]: Started sshd@3-10.200.4.8:22-10.200.16.10:53870.service - OpenSSH per-connection server daemon (10.200.16.10:53870).
Jun 20 19:15:34.154689 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 53870 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:34.156090 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:34.160573 systemd-logind[1696]: New session 6 of user core.
Jun 20 19:15:34.166648 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:15:34.579824 sshd[2105]: Connection closed by 10.200.16.10 port 53870
Jun 20 19:15:34.580676 sshd-session[2103]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:34.583758 systemd[1]: sshd@3-10.200.4.8:22-10.200.16.10:53870.service: Deactivated successfully.
Jun 20 19:15:34.585542 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:15:34.586896 systemd-logind[1696]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:15:34.588058 systemd-logind[1696]: Removed session 6.
Jun 20 19:15:34.691886 systemd[1]: Started sshd@4-10.200.4.8:22-10.200.16.10:53886.service - OpenSSH per-connection server daemon (10.200.16.10:53886).
Jun 20 19:15:35.290205 sshd[2111]: Accepted publickey for core from 10.200.16.10 port 53886 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:35.291629 sshd-session[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:35.296375 systemd-logind[1696]: New session 7 of user core.
Jun 20 19:15:35.300654 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:15:35.737971 sudo[2114]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:15:35.738210 sudo[2114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:35.750719 sudo[2114]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:35.845378 sshd[2113]: Connection closed by 10.200.16.10 port 53886
Jun 20 19:15:35.846321 sshd-session[2111]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:35.849865 systemd[1]: sshd@4-10.200.4.8:22-10.200.16.10:53886.service: Deactivated successfully.
Jun 20 19:15:35.851518 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:15:35.853306 systemd-logind[1696]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:15:35.854232 systemd-logind[1696]: Removed session 7.
Jun 20 19:15:35.968182 systemd[1]: Started sshd@5-10.200.4.8:22-10.200.16.10:53902.service - OpenSSH per-connection server daemon (10.200.16.10:53902).
Jun 20 19:15:36.561553 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 53902 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:36.563034 sshd-session[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:36.567749 systemd-logind[1696]: New session 8 of user core.
Jun 20 19:15:36.572665 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:15:36.885996 sudo[2124]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:15:36.886238 sudo[2124]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:36.893396 sudo[2124]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:36.897826 sudo[2123]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:15:36.898052 sudo[2123]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:36.906187 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:15:36.946016 augenrules[2146]: No rules
Jun 20 19:15:36.947099 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:15:36.947322 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:15:36.948837 sudo[2123]: pam_unix(sudo:session): session closed for user root
Jun 20 19:15:37.050429 sshd[2122]: Connection closed by 10.200.16.10 port 53902
Jun 20 19:15:37.051134 sshd-session[2120]: pam_unix(sshd:session): session closed for user core
Jun 20 19:15:37.054557 systemd[1]: sshd@5-10.200.4.8:22-10.200.16.10:53902.service: Deactivated successfully.
Jun 20 19:15:37.056124 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:15:37.057548 systemd-logind[1696]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:15:37.058731 systemd-logind[1696]: Removed session 8.
Jun 20 19:15:37.156374 systemd[1]: Started sshd@6-10.200.4.8:22-10.200.16.10:53908.service - OpenSSH per-connection server daemon (10.200.16.10:53908).
Jun 20 19:15:37.747339 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 53908 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:15:37.748761 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:15:37.753565 systemd-logind[1696]: New session 9 of user core.
Jun 20 19:15:37.763650 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:15:38.071779 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:15:38.072018 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:15:38.331461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:15:38.333152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:39.068751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:39.074713 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:39.115023 kubelet[2180]: E0620 19:15:39.114951 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:39.117060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:39.117197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:39.117552 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.9M memory peak.
Jun 20 19:15:39.636097 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:15:39.645830 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:15:40.395425 dockerd[2192]: time="2025-06-20T19:15:40.395336729Z" level=info msg="Starting up"
Jun 20 19:15:40.398934 dockerd[2192]: time="2025-06-20T19:15:40.398891982Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:15:40.434828 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3411555320-merged.mount: Deactivated successfully.
Jun 20 19:15:40.573682 dockerd[2192]: time="2025-06-20T19:15:40.573471313Z" level=info msg="Loading containers: start."
Jun 20 19:15:40.616519 kernel: Initializing XFRM netlink socket
Jun 20 19:15:40.889062 systemd-networkd[1355]: docker0: Link UP
Jun 20 19:15:40.902312 dockerd[2192]: time="2025-06-20T19:15:40.902267800Z" level=info msg="Loading containers: done."
Jun 20 19:15:40.924043 dockerd[2192]: time="2025-06-20T19:15:40.923994476Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:15:40.924199 dockerd[2192]: time="2025-06-20T19:15:40.924087161Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:15:40.924199 dockerd[2192]: time="2025-06-20T19:15:40.924192919Z" level=info msg="Initializing buildkit"
Jun 20 19:15:40.964148 dockerd[2192]: time="2025-06-20T19:15:40.964096151Z" level=info msg="Completed buildkit initialization"
Jun 20 19:15:40.970477 dockerd[2192]: time="2025-06-20T19:15:40.970420750Z" level=info msg="Daemon has completed initialization"
Jun 20 19:15:40.970477 dockerd[2192]: time="2025-06-20T19:15:40.970510334Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:15:40.970853 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:15:41.746985 containerd[1721]: time="2025-06-20T19:15:41.746940317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 20 19:15:43.978836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667284662.mount: Deactivated successfully.
Jun 20 19:15:48.986940 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jun 20 19:15:49.331711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 19:15:49.333734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:15:49.746946 update_engine[1697]: I20250620 19:15:49.746712 1697 update_attempter.cc:509] Updating boot flags...
Jun 20 19:15:53.587659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:15:53.600767 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:15:53.676094 kubelet[2480]: E0620 19:15:53.676042 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:15:53.678189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:15:53.678328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:15:53.678692 systemd[1]: kubelet.service: Consumed 149ms CPU time, 110.5M memory peak.
Jun 20 19:15:54.773970 containerd[1721]: time="2025-06-20T19:15:54.773915962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:54.776252 containerd[1721]: time="2025-06-20T19:15:54.776213924Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107"
Jun 20 19:15:54.778762 containerd[1721]: time="2025-06-20T19:15:54.778706690Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:54.782250 containerd[1721]: time="2025-06-20T19:15:54.782205778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:54.783108 containerd[1721]: time="2025-06-20T19:15:54.782890017Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 13.035905966s"
Jun 20 19:15:54.783108 containerd[1721]: time="2025-06-20T19:15:54.782924197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jun 20 19:15:54.783609 containerd[1721]: time="2025-06-20T19:15:54.783578551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 20 19:15:57.369791 containerd[1721]: time="2025-06-20T19:15:57.369737050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:57.372319 containerd[1721]: time="2025-06-20T19:15:57.372251603Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954"
Jun 20 19:15:57.375408 containerd[1721]: time="2025-06-20T19:15:57.375364671Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:57.379550 containerd[1721]: time="2025-06-20T19:15:57.379484476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:57.380510 containerd[1721]: time="2025-06-20T19:15:57.380070565Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.59646322s"
Jun 20 19:15:57.380510 containerd[1721]: time="2025-06-20T19:15:57.380103540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jun 20 19:15:57.380759 containerd[1721]: time="2025-06-20T19:15:57.380742806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 19:15:59.638376 containerd[1721]: time="2025-06-20T19:15:59.638327650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:59.641519 containerd[1721]: time="2025-06-20T19:15:59.641470001Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063"
Jun 20 19:15:59.644849 containerd[1721]: time="2025-06-20T19:15:59.644803843Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:59.649087 containerd[1721]: time="2025-06-20T19:15:59.649045788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:15:59.649765 containerd[1721]: time="2025-06-20T19:15:59.649627911Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.268856994s"
Jun 20 19:15:59.649765 containerd[1721]: time="2025-06-20T19:15:59.649662437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jun 20 19:15:59.650343 containerd[1721]: time="2025-06-20T19:15:59.650159493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 19:16:01.349252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269683954.mount: Deactivated successfully.
Jun 20 19:16:01.735637 containerd[1721]: time="2025-06-20T19:16:01.735514503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.738270 containerd[1721]: time="2025-06-20T19:16:01.738241742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754"
Jun 20 19:16:01.740842 containerd[1721]: time="2025-06-20T19:16:01.740809049Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.744069 containerd[1721]: time="2025-06-20T19:16:01.744041753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:01.744418 containerd[1721]: time="2025-06-20T19:16:01.744393169Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.094207971s"
Jun 20 19:16:01.744459 containerd[1721]: time="2025-06-20T19:16:01.744429429Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jun 20 19:16:01.744993 containerd[1721]: time="2025-06-20T19:16:01.744970301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 19:16:02.397032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989565304.mount: Deactivated successfully.
Jun 20 19:16:03.354392 containerd[1721]: time="2025-06-20T19:16:03.354338851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:03.357010 containerd[1721]: time="2025-06-20T19:16:03.356976127Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jun 20 19:16:03.361678 containerd[1721]: time="2025-06-20T19:16:03.361631388Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:03.365582 containerd[1721]: time="2025-06-20T19:16:03.365533630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:03.366442 containerd[1721]: time="2025-06-20T19:16:03.366204496Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.621204185s"
Jun 20 19:16:03.366442 containerd[1721]: time="2025-06-20T19:16:03.366236797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jun 20 19:16:03.366826 containerd[1721]: time="2025-06-20T19:16:03.366785983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:16:03.831560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jun 20 19:16:03.833243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:04.227150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494159739.mount: Deactivated successfully.
Jun 20 19:16:04.364399 containerd[1721]: time="2025-06-20T19:16:04.364356515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:04.366558 containerd[1721]: time="2025-06-20T19:16:04.366521421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jun 20 19:16:04.369849 containerd[1721]: time="2025-06-20T19:16:04.369812345Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:04.370243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:04.374523 containerd[1721]: time="2025-06-20T19:16:04.373656838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 19:16:04.374810 containerd[1721]: time="2025-06-20T19:16:04.374780919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.007882054s"
Jun 20 19:16:04.374853 containerd[1721]: time="2025-06-20T19:16:04.374820462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:16:04.375346 containerd[1721]: time="2025-06-20T19:16:04.375305343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 19:16:04.378796 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:16:04.412719 kubelet[2584]: E0620 19:16:04.412653 2584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:16:04.414691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:16:04.414829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:16:04.415185 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.7M memory peak.
Jun 20 19:16:04.961961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204138582.mount: Deactivated successfully.
Jun 20 19:16:06.501317 containerd[1721]: time="2025-06-20T19:16:06.501258715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:06.503622 containerd[1721]: time="2025-06-20T19:16:06.503587850Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183"
Jun 20 19:16:06.507023 containerd[1721]: time="2025-06-20T19:16:06.506986833Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:06.512344 containerd[1721]: time="2025-06-20T19:16:06.512282291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:16:06.513210 containerd[1721]: time="2025-06-20T19:16:06.513056811Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.137697022s"
Jun 20 19:16:06.513210 containerd[1721]: time="2025-06-20T19:16:06.513086137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jun 20 19:16:09.224285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:09.224802 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.7M memory peak.
Jun 20 19:16:09.226841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:09.253458 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-9.scope)...
Jun 20 19:16:09.253600 systemd[1]: Reloading...
Jun 20 19:16:09.348524 zram_generator::config[2721]: No configuration found.
Jun 20 19:16:09.443813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:16:09.551116 systemd[1]: Reloading finished in 297 ms.
Jun 20 19:16:09.662768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:16:09.662865 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:16:09.663129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:09.663193 systemd[1]: kubelet.service: Consumed 84ms CPU time, 83.2M memory peak.
Jun 20 19:16:09.665439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:10.198217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:10.205759 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:16:10.242389 kubelet[2788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:16:10.242389 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:16:10.242389 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:16:10.242806 kubelet[2788]: I0620 19:16:10.242449 2788 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:16:10.440832 kubelet[2788]: I0620 19:16:10.440793 2788 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 19:16:10.440832 kubelet[2788]: I0620 19:16:10.440819 2788 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:16:10.441095 kubelet[2788]: I0620 19:16:10.441082 2788 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 19:16:10.471798 kubelet[2788]: E0620 19:16:10.471397 2788 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 19:16:10.473538 kubelet[2788]: I0620 19:16:10.473212 2788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:16:10.482356 kubelet[2788]: I0620 19:16:10.482335 2788 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 19:16:10.485402 kubelet[2788]: I0620 19:16:10.485380 2788 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:16:10.485626 kubelet[2788]: I0620 19:16:10.485603 2788 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:16:10.485793 kubelet[2788]: I0620 19:16:10.485624 2788 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-324c5119a7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:16:10.485911 kubelet[2788]: I0620 19:16:10.485797 2788 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:16:10.485911 kubelet[2788]: I0620 19:16:10.485806 2788 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 19:16:10.485955 kubelet[2788]: I0620 19:16:10.485938 2788 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:16:10.489269 kubelet[2788]: I0620 19:16:10.489247 2788 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 19:16:10.489269 kubelet[2788]: I0620 19:16:10.489268 2788 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:16:10.489477 kubelet[2788]: I0620 19:16:10.489293 2788 kubelet.go:386] "Adding apiserver pod source"
Jun 20 19:16:10.489477 kubelet[2788]: I0620 19:16:10.489308 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:16:10.495953 kubelet[2788]: E0620 19:16:10.495912 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-324c5119a7&limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 19:16:10.496047 kubelet[2788]: I0620 19:16:10.496023 2788 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 19:16:10.496678 kubelet[2788]: I0620 19:16:10.496598 2788 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 19:16:10.498056 kubelet[2788]: W0620 19:16:10.497312 2788 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:16:10.500186 kubelet[2788]: I0620 19:16:10.499942 2788 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:16:10.500186 kubelet[2788]: I0620 19:16:10.500001 2788 server.go:1289] "Started kubelet"
Jun 20 19:16:10.502240 kubelet[2788]: E0620 19:16:10.502040 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 19:16:10.503480 kubelet[2788]: I0620 19:16:10.503439 2788 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:16:10.504456 kubelet[2788]: I0620 19:16:10.504405 2788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:16:10.504871 kubelet[2788]: I0620 19:16:10.504859 2788 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:16:10.506082 kubelet[2788]: I0620 19:16:10.505007 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:16:10.508906 kubelet[2788]: E0620 19:16:10.507512 2788 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.8:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.8:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-324c5119a7.184ad63ed2cf1dff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-324c5119a7,UID:ci-4344.1.0-a-324c5119a7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-324c5119a7,},FirstTimestamp:2025-06-20 19:16:10.499964415 +0000 UTC m=+0.290320854,LastTimestamp:2025-06-20 19:16:10.499964415 +0000 UTC m=+0.290320854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-324c5119a7,}"
Jun 20 19:16:10.511390 kubelet[2788]: I0620 19:16:10.511279 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:16:10.511570 kubelet[2788]: I0620 19:16:10.511534 2788 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 19:16:10.512540 kubelet[2788]: I0620 19:16:10.512457 2788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:16:10.514957 kubelet[2788]: I0620 19:16:10.514921 2788 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:16:10.515526 kubelet[2788]: E0620 19:16:10.515020 2788 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-324c5119a7\" not found"
Jun 20 19:16:10.515526 kubelet[2788]: I0620 19:16:10.515071 2788 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:16:10.515526 kubelet[2788]: I0620 19:16:10.515113 2788 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:16:10.515526 kubelet[2788]: E0620 19:16:10.515414 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 19:16:10.515526 kubelet[2788]: E0620 19:16:10.515474 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-324c5119a7?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="200ms"
Jun 20 19:16:10.518375 kubelet[2788]: I0620 19:16:10.518357 2788 factory.go:223] Registration of the systemd container factory successfully
Jun 20 19:16:10.518465 kubelet[2788]: I0620 19:16:10.518447 2788 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:16:10.518843 kubelet[2788]: E0620 19:16:10.518826 2788 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:16:10.519766 kubelet[2788]: I0620 19:16:10.519750 2788 factory.go:223] Registration of the containerd container factory successfully
Jun 20 19:16:10.548290 kubelet[2788]: I0620 19:16:10.547635 2788 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:16:10.548290 kubelet[2788]: I0620 19:16:10.547651 2788 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:16:10.548290 kubelet[2788]: I0620 19:16:10.547669 2788 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:16:10.548598 kubelet[2788]: I0620 19:16:10.548488 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:16:10.548598 kubelet[2788]: I0620 19:16:10.548529 2788 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 19:16:10.548598 kubelet[2788]: I0620 19:16:10.548546 2788 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:16:10.548598 kubelet[2788]: I0620 19:16:10.548554 2788 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 19:16:10.548598 kubelet[2788]: E0620 19:16:10.548587 2788 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:16:10.551228 kubelet[2788]: E0620 19:16:10.551199 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 19:16:10.557707 kubelet[2788]: I0620 19:16:10.557687 2788 policy_none.go:49] "None policy: Start"
Jun 20 19:16:10.557707 kubelet[2788]: I0620 19:16:10.557707 2788 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:16:10.557816 kubelet[2788]: I0620 19:16:10.557718 2788 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:16:10.566113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:16:10.573639 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:16:10.576272 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:16:10.587047 kubelet[2788]: E0620 19:16:10.587024 2788 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 19:16:10.587220 kubelet[2788]: I0620 19:16:10.587208 2788 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:16:10.587254 kubelet[2788]: I0620 19:16:10.587224 2788 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:16:10.587647 kubelet[2788]: I0620 19:16:10.587587 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:16:10.589586 kubelet[2788]: E0620 19:16:10.589518 2788 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:16:10.589586 kubelet[2788]: E0620 19:16:10.589560 2788 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-324c5119a7\" not found"
Jun 20 19:16:10.660855 systemd[1]: Created slice kubepods-burstable-pod680ccfe3caf89fcadbaf8466f1528c37.slice - libcontainer container kubepods-burstable-pod680ccfe3caf89fcadbaf8466f1528c37.slice.
Jun 20 19:16:10.669278 kubelet[2788]: E0620 19:16:10.669094 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.672914 systemd[1]: Created slice kubepods-burstable-pod7af97a1242e9a19b2cb8b0a6c8449566.slice - libcontainer container kubepods-burstable-pod7af97a1242e9a19b2cb8b0a6c8449566.slice.
Jun 20 19:16:10.684524 kubelet[2788]: E0620 19:16:10.684396 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.686735 systemd[1]: Created slice kubepods-burstable-poda8ffea1da190e1fca667675cc89f922d.slice - libcontainer container kubepods-burstable-poda8ffea1da190e1fca667675cc89f922d.slice.
Jun 20 19:16:10.688344 kubelet[2788]: E0620 19:16:10.688318 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.690665 kubelet[2788]: I0620 19:16:10.690650 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.691041 kubelet[2788]: E0620 19:16:10.691020 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716580 kubelet[2788]: I0620 19:16:10.716332 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716580 kubelet[2788]: I0620 19:16:10.716366 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716580 kubelet[2788]: I0620 19:16:10.716388 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716580 kubelet[2788]: I0620 19:16:10.716407 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716580 kubelet[2788]: I0620 19:16:10.716426 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716744 kubelet[2788]: I0620 19:16:10.716443 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716744 kubelet[2788]: I0620 19:16:10.716463 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8ffea1da190e1fca667675cc89f922d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-324c5119a7\" (UID: \"a8ffea1da190e1fca667675cc89f922d\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716744 kubelet[2788]: I0620 19:16:10.716480 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716744 kubelet[2788]: I0620 19:16:10.716513 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.716744 kubelet[2788]: E0620 19:16:10.716538 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-324c5119a7?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="400ms"
Jun 20 19:16:10.893787 kubelet[2788]: I0620 19:16:10.893745 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.894230 kubelet[2788]: E0620 19:16:10.894187 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:10.970785 containerd[1721]: time="2025-06-20T19:16:10.970739739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-324c5119a7,Uid:680ccfe3caf89fcadbaf8466f1528c37,Namespace:kube-system,Attempt:0,}"
Jun 20 19:16:10.985477 containerd[1721]: time="2025-06-20T19:16:10.985399670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-324c5119a7,Uid:7af97a1242e9a19b2cb8b0a6c8449566,Namespace:kube-system,Attempt:0,}"
Jun 20 19:16:10.991321 containerd[1721]: time="2025-06-20T19:16:10.991246498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-324c5119a7,Uid:a8ffea1da190e1fca667675cc89f922d,Namespace:kube-system,Attempt:0,}"
Jun 20 19:16:11.037609 containerd[1721]: time="2025-06-20T19:16:11.037543283Z" level=info msg="connecting to shim e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385" address="unix:///run/containerd/s/09fc6b47222f9cc856870b5876080f93d70116798fb33775902e92d5f24827d0" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:16:11.059860 containerd[1721]: time="2025-06-20T19:16:11.059810988Z" level=info msg="connecting to shim 4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6" address="unix:///run/containerd/s/e08c2423723f22b2911a1bda5a39c5ea8b8cb60e75736b0460134a011d02ba93" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:16:11.069738 systemd[1]: Started cri-containerd-e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385.scope - libcontainer container e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385.
Jun 20 19:16:11.082761 containerd[1721]: time="2025-06-20T19:16:11.082720330Z" level=info msg="connecting to shim eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a" address="unix:///run/containerd/s/012acc0182e13ed4737b29f5d8912c2693edc8e8175a84db4d80b9674e549d91" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:16:11.099842 systemd[1]: Started cri-containerd-4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6.scope - libcontainer container 4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6.
Jun 20 19:16:11.114701 systemd[1]: Started cri-containerd-eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a.scope - libcontainer container eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a.
Jun 20 19:16:11.117433 kubelet[2788]: E0620 19:16:11.117399 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-324c5119a7?timeout=10s\": dial tcp 10.200.4.8:6443: connect: connection refused" interval="800ms"
Jun 20 19:16:11.169944 containerd[1721]: time="2025-06-20T19:16:11.169812289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-324c5119a7,Uid:680ccfe3caf89fcadbaf8466f1528c37,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385\""
Jun 20 19:16:11.181632 containerd[1721]: time="2025-06-20T19:16:11.181499590Z" level=info msg="CreateContainer within sandbox \"e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 20 19:16:11.202883 containerd[1721]: time="2025-06-20T19:16:11.202851460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-324c5119a7,Uid:a8ffea1da190e1fca667675cc89f922d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a\""
Jun 20 19:16:11.205301 containerd[1721]: time="2025-06-20T19:16:11.205274033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-324c5119a7,Uid:7af97a1242e9a19b2cb8b0a6c8449566,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6\""
Jun 20 19:16:11.210421 containerd[1721]: time="2025-06-20T19:16:11.210389660Z" level=info msg="CreateContainer within sandbox \"eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 20 19:16:11.213550 containerd[1721]: time="2025-06-20T19:16:11.213522872Z" level=info msg="Container 3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:16:11.222244 containerd[1721]: time="2025-06-20T19:16:11.221863087Z" level=info msg="CreateContainer within sandbox \"4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 20 19:16:11.241060 containerd[1721]: time="2025-06-20T19:16:11.241024516Z" level=info msg="Container 13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:16:11.248847 containerd[1721]: time="2025-06-20T19:16:11.248800383Z" level=info msg="CreateContainer within sandbox \"e8b7499f894d36b3c0a10c50b9171e412b3a0e4d1ed5d8477510773753fcf385\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6\""
Jun 20 19:16:11.249528 containerd[1721]: time="2025-06-20T19:16:11.249488932Z" level=info msg="StartContainer for \"3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6\""
Jun 20 19:16:11.250421 containerd[1721]: time="2025-06-20T19:16:11.250390797Z" level=info msg="connecting to shim 3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6" address="unix:///run/containerd/s/09fc6b47222f9cc856870b5876080f93d70116798fb33775902e92d5f24827d0" protocol=ttrpc version=3
Jun 20 19:16:11.259012 containerd[1721]: time="2025-06-20T19:16:11.258978466Z" level=info msg="CreateContainer within sandbox \"eb0d575391e72934b00af1060c5be0bdfdc51fefe117c4d21819806ea5cefd8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88\""
Jun 20 19:16:11.259862 containerd[1721]: time="2025-06-20T19:16:11.259828221Z" level=info msg="StartContainer for \"13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88\""
Jun 20 19:16:11.260902 containerd[1721]: time="2025-06-20T19:16:11.260877576Z" level=info msg="connecting to shim 13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88" address="unix:///run/containerd/s/012acc0182e13ed4737b29f5d8912c2693edc8e8175a84db4d80b9674e549d91" protocol=ttrpc version=3
Jun 20 19:16:11.262716 containerd[1721]: time="2025-06-20T19:16:11.262695108Z" level=info msg="Container 4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:16:11.270703 systemd[1]: Started cri-containerd-3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6.scope - libcontainer container 3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6.
Jun 20 19:16:11.283586 containerd[1721]: time="2025-06-20T19:16:11.283553301Z" level=info msg="CreateContainer within sandbox \"4cbb0fab243eac2298a55059fe7e52e121e154934a76f3654d667ddba3f9c9c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe\""
Jun 20 19:16:11.284042 containerd[1721]: time="2025-06-20T19:16:11.284023010Z" level=info msg="StartContainer for \"4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe\""
Jun 20 19:16:11.285189 systemd[1]: Started cri-containerd-13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88.scope - libcontainer container 13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88.
Jun 20 19:16:11.286262 containerd[1721]: time="2025-06-20T19:16:11.286223619Z" level=info msg="connecting to shim 4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe" address="unix:///run/containerd/s/e08c2423723f22b2911a1bda5a39c5ea8b8cb60e75736b0460134a011d02ba93" protocol=ttrpc version=3
Jun 20 19:16:11.297309 kubelet[2788]: I0620 19:16:11.296905 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:11.298507 kubelet[2788]: E0620 19:16:11.297772 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.8:6443/api/v1/nodes\": dial tcp 10.200.4.8:6443: connect: connection refused" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:11.316781 systemd[1]: Started cri-containerd-4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe.scope - libcontainer container 4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe.
Jun 20 19:16:11.355357 containerd[1721]: time="2025-06-20T19:16:11.355310668Z" level=info msg="StartContainer for \"3058a2aa9b764ecd41d30a6a5a1748a8cf7b0204fa4cefe44fb62d34652711a6\" returns successfully"
Jun 20 19:16:11.371271 containerd[1721]: time="2025-06-20T19:16:11.371228024Z" level=info msg="StartContainer for \"13f0801ebfe84148a99519ec4f3ccd6819f64803ef2f4cf34eae6b9de0150e88\" returns successfully"
Jun 20 19:16:11.409337 containerd[1721]: time="2025-06-20T19:16:11.409272176Z" level=info msg="StartContainer for \"4016f441b60b954b97caa4a16ad69be48e5af29dabf1961f5714c2037c9b13fe\" returns successfully"
Jun 20 19:16:11.559805 kubelet[2788]: E0620 19:16:11.559699 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:11.562184 kubelet[2788]: E0620 19:16:11.562155 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:11.570186 kubelet[2788]: E0620 19:16:11.570156 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:12.101121 kubelet[2788]: I0620 19:16:12.101090 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:12.565167 kubelet[2788]: E0620 19:16:12.565058 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:12.566002 kubelet[2788]: E0620 19:16:12.565982 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:12.566305 kubelet[2788]: E0620 19:16:12.566291 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:12.997421 kubelet[2788]: E0620 19:16:12.997370 2788 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-324c5119a7\" not found" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.021619 kubelet[2788]: I0620 19:16:13.021481 2788 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.021619 kubelet[2788]: E0620 19:16:13.021530 2788 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-324c5119a7\": node \"ci-4344.1.0-a-324c5119a7\" not found"
Jun 20 19:16:13.115626 kubelet[2788]: I0620 19:16:13.114994 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.120722 kubelet[2788]: E0620 19:16:13.120697 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.120851 kubelet[2788]: I0620 19:16:13.120756 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.122102 kubelet[2788]: E0620 19:16:13.122074 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.122102 kubelet[2788]: I0620 19:16:13.122097 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.123252 kubelet[2788]: E0620 19:16:13.123232 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-324c5119a7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7"
Jun 20 19:16:13.505726 kubelet[2788]: I0620 19:16:13.505674 2788 apiserver.go:52] "Watching apiserver"
Jun 20 19:16:13.516085 kubelet[2788]: I0620 19:16:13.516055 2788 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:16:15.154595 systemd[1]: Reload requested from client PID 3071 ('systemctl') (unit session-9.scope)...
Jun 20 19:16:15.154612 systemd[1]: Reloading...
Jun 20 19:16:15.244551 zram_generator::config[3117]: No configuration found.
Jun 20 19:16:15.323923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:16:15.434022 systemd[1]: Reloading finished in 279 ms.
Jun 20 19:16:15.461266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:15.484637 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 19:16:15.484884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:15.484945 systemd[1]: kubelet.service: Consumed 629ms CPU time, 128.8M memory peak.
Jun 20 19:16:15.486605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:16:15.902980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:16:15.914790 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:16:15.957478 kubelet[3184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:16:15.957478 kubelet[3184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:16:15.957478 kubelet[3184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:16:15.957915 kubelet[3184]: I0620 19:16:15.957571 3184 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:16:15.965537 kubelet[3184]: I0620 19:16:15.965288 3184 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:16:15.965537 kubelet[3184]: I0620 19:16:15.965308 3184 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:16:15.965689 kubelet[3184]: I0620 19:16:15.965572 3184 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:16:15.966543 kubelet[3184]: I0620 19:16:15.966523 3184 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:16:15.969544 kubelet[3184]: I0620 19:16:15.969457 3184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:16:15.974691 kubelet[3184]: I0620 19:16:15.974533 3184 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jun 20 19:16:15.979592 kubelet[3184]: I0620 19:16:15.979350 3184 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:16:15.979919 kubelet[3184]: I0620 19:16:15.979897 3184 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:16:15.980233 kubelet[3184]: I0620 19:16:15.979976 3184 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-324c5119a7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Jun 20 19:16:15.980366 kubelet[3184]: I0620 19:16:15.980358 3184 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:16:15.980432 kubelet[3184]: I0620 19:16:15.980427 3184 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:16:15.980521 kubelet[3184]: I0620 19:16:15.980516 3184 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:15.980716 kubelet[3184]: I0620 19:16:15.980709 3184 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:16:15.981104 kubelet[3184]: I0620 19:16:15.981004 3184 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:16:15.981189 kubelet[3184]: I0620 19:16:15.981183 3184 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:16:15.981239 kubelet[3184]: I0620 19:16:15.981234 3184 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:16:15.986885 kubelet[3184]: I0620 19:16:15.986869 3184 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:16:15.987521 kubelet[3184]: I0620 19:16:15.987509 3184 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:16:15.990458 kubelet[3184]: I0620 19:16:15.990087 3184 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:16:15.990458 kubelet[3184]: I0620 19:16:15.990131 3184 server.go:1289] "Started kubelet" Jun 20 19:16:15.990722 kubelet[3184]: I0620 19:16:15.990701 3184 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:16:15.991653 kubelet[3184]: I0620 19:16:15.991637 3184 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:16:15.993657 kubelet[3184]: I0620 19:16:15.991769 3184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:16:15.993890 
kubelet[3184]: I0620 19:16:15.993875 3184 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:16:15.995432 kubelet[3184]: I0620 19:16:15.995231 3184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:16:16.004027 kubelet[3184]: E0620 19:16:16.004001 3184 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:16:16.005515 kubelet[3184]: I0620 19:16:16.004576 3184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:16:16.006797 kubelet[3184]: I0620 19:16:16.006774 3184 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:16:16.008768 kubelet[3184]: I0620 19:16:16.008747 3184 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:16:16.008867 kubelet[3184]: I0620 19:16:16.008857 3184 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:16:16.011588 kubelet[3184]: I0620 19:16:16.011573 3184 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:16:16.011818 kubelet[3184]: I0620 19:16:16.011801 3184 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:16:16.015579 kubelet[3184]: I0620 19:16:16.015562 3184 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:16:16.017597 kubelet[3184]: I0620 19:16:16.017571 3184 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:16:16.020071 kubelet[3184]: I0620 19:16:16.020018 3184 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:16:16.020544 kubelet[3184]: I0620 19:16:16.020527 3184 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:16:16.020626 kubelet[3184]: I0620 19:16:16.020559 3184 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:16:16.020626 kubelet[3184]: I0620 19:16:16.020567 3184 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:16:16.020626 kubelet[3184]: E0620 19:16:16.020601 3184 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085630 3184 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085649 3184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085670 3184 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085799 3184 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085807 3184 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085825 3184 policy_none.go:49] "None policy: Start" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085835 3184 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085844 3184 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:16:16.086701 kubelet[3184]: I0620 19:16:16.085933 3184 state_mem.go:75] "Updated machine memory state" Jun 20 19:16:16.096279 kubelet[3184]: E0620 19:16:16.096254 3184 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:16:16.097465 kubelet[3184]: I0620 
19:16:16.097444 3184 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:16:16.097582 kubelet[3184]: I0620 19:16:16.097464 3184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:16:16.098484 kubelet[3184]: I0620 19:16:16.098284 3184 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:16:16.099877 kubelet[3184]: E0620 19:16:16.099851 3184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:16:16.121463 kubelet[3184]: I0620 19:16:16.121435 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.121805 kubelet[3184]: I0620 19:16:16.121720 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.122321 kubelet[3184]: I0620 19:16:16.122164 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.130440 kubelet[3184]: I0620 19:16:16.130422 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:16.134663 kubelet[3184]: I0620 19:16:16.134638 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:16.134753 kubelet[3184]: I0620 19:16:16.134747 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:16.172234 sudo[3221]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 
19:16:16.172472 sudo[3221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:16:16.206910 kubelet[3184]: I0620 19:16:16.206876 3184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.223888 kubelet[3184]: I0620 19:16:16.223819 3184 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.224290 kubelet[3184]: I0620 19:16:16.224217 3184 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.312503 kubelet[3184]: I0620 19:16:16.310920 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.312503 kubelet[3184]: I0620 19:16:16.311081 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8ffea1da190e1fca667675cc89f922d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-324c5119a7\" (UID: \"a8ffea1da190e1fca667675cc89f922d\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.312878 kubelet[3184]: I0620 19:16:16.311106 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.312878 kubelet[3184]: I0620 19:16:16.312828 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.312878 kubelet[3184]: I0620 19:16:16.312852 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.313114 kubelet[3184]: I0620 19:16:16.312986 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.313114 kubelet[3184]: I0620 19:16:16.313023 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/680ccfe3caf89fcadbaf8466f1528c37-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" (UID: \"680ccfe3caf89fcadbaf8466f1528c37\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.313114 kubelet[3184]: I0620 19:16:16.313067 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.313114 kubelet[3184]: I0620 19:16:16.313088 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7af97a1242e9a19b2cb8b0a6c8449566-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" (UID: \"7af97a1242e9a19b2cb8b0a6c8449566\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:16.711029 sudo[3221]: pam_unix(sudo:session): session closed for user root Jun 20 19:16:16.986511 kubelet[3184]: I0620 19:16:16.986372 3184 apiserver.go:52] "Watching apiserver" Jun 20 19:16:17.009598 kubelet[3184]: I0620 19:16:17.009531 3184 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:16:17.058349 kubelet[3184]: I0620 19:16:17.058315 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.059167 kubelet[3184]: I0620 19:16:17.059132 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.059510 kubelet[3184]: I0620 19:16:17.059472 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.082709 kubelet[3184]: I0620 19:16:17.082682 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:17.083062 kubelet[3184]: E0620 19:16:17.082873 3184 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-324c5119a7\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.083406 kubelet[3184]: I0620 19:16:17.083393 3184 warnings.go:110] "Warning: metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:17.083534 kubelet[3184]: E0620 19:16:17.083524 3184 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-324c5119a7\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.090646 kubelet[3184]: I0620 19:16:17.090612 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:16:17.090739 kubelet[3184]: E0620 19:16:17.090673 3184 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-324c5119a7\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" Jun 20 19:16:17.103117 kubelet[3184]: I0620 19:16:17.102902 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-324c5119a7" podStartSLOduration=1.102885644 podStartE2EDuration="1.102885644s" podCreationTimestamp="2025-06-20 19:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:17.060613786 +0000 UTC m=+1.142029320" watchObservedRunningTime="2025-06-20 19:16:17.102885644 +0000 UTC m=+1.184301180" Jun 20 19:16:17.103117 kubelet[3184]: I0620 19:16:17.103034 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-324c5119a7" podStartSLOduration=1.103027437 podStartE2EDuration="1.103027437s" podCreationTimestamp="2025-06-20 19:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:17.100317588 +0000 UTC m=+1.181733126" watchObservedRunningTime="2025-06-20 19:16:17.103027437 +0000 UTC m=+1.184442968" Jun 
20 19:16:17.115387 kubelet[3184]: I0620 19:16:17.115338 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-324c5119a7" podStartSLOduration=1.115324049 podStartE2EDuration="1.115324049s" podCreationTimestamp="2025-06-20 19:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:17.115119285 +0000 UTC m=+1.196534825" watchObservedRunningTime="2025-06-20 19:16:17.115324049 +0000 UTC m=+1.196739595" Jun 20 19:16:18.004582 sudo[2158]: pam_unix(sudo:session): session closed for user root Jun 20 19:16:18.097612 sshd[2157]: Connection closed by 10.200.16.10 port 53908 Jun 20 19:16:18.098811 sshd-session[2155]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:18.101573 systemd[1]: sshd@6-10.200.4.8:22-10.200.16.10:53908.service: Deactivated successfully. Jun 20 19:16:18.103658 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:16:18.103874 systemd[1]: session-9.scope: Consumed 4.291s CPU time, 273.5M memory peak. Jun 20 19:16:18.105843 systemd-logind[1696]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:16:18.107232 systemd-logind[1696]: Removed session 9. Jun 20 19:16:20.875519 kubelet[3184]: I0620 19:16:20.875477 3184 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:16:20.875938 containerd[1721]: time="2025-06-20T19:16:20.875898489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 20 19:16:20.876197 kubelet[3184]: I0620 19:16:20.876124 3184 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:16:21.515324 systemd[1]: Created slice kubepods-burstable-pod8e5b77a2_df05_4f69_b053_039d452cf80e.slice - libcontainer container kubepods-burstable-pod8e5b77a2_df05_4f69_b053_039d452cf80e.slice. Jun 20 19:16:21.526509 systemd[1]: Created slice kubepods-besteffort-podb6cae41b_6ca9_46cc_952a_68cc32d8fd02.slice - libcontainer container kubepods-besteffort-podb6cae41b_6ca9_46cc_952a_68cc32d8fd02.slice. Jun 20 19:16:21.544207 kubelet[3184]: I0620 19:16:21.543533 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-kernel\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544207 kubelet[3184]: I0620 19:16:21.543575 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-kube-proxy\") pod \"kube-proxy-tdr6t\" (UID: \"b6cae41b-6ca9-46cc-952a-68cc32d8fd02\") " pod="kube-system/kube-proxy-tdr6t" Jun 20 19:16:21.544207 kubelet[3184]: I0620 19:16:21.543593 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e5b77a2-df05-4f69-b053-039d452cf80e-clustermesh-secrets\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544207 kubelet[3184]: I0620 19:16:21.543609 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llljq\" (UniqueName: \"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq\") 
pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544207 kubelet[3184]: I0620 19:16:21.543631 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmgf\" (UniqueName: \"kubernetes.io/projected/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-kube-api-access-ltmgf\") pod \"kube-proxy-tdr6t\" (UID: \"b6cae41b-6ca9-46cc-952a-68cc32d8fd02\") " pod="kube-system/kube-proxy-tdr6t" Jun 20 19:16:21.544470 kubelet[3184]: I0620 19:16:21.543649 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-run\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544470 kubelet[3184]: I0620 19:16:21.543665 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-bpf-maps\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544470 kubelet[3184]: I0620 19:16:21.543682 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-hostproc\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544470 kubelet[3184]: I0620 19:16:21.543697 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-net\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544470 
kubelet[3184]: I0620 19:16:21.543713 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-hubble-tls\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544470 kubelet[3184]: I0620 19:16:21.543731 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-xtables-lock\") pod \"kube-proxy-tdr6t\" (UID: \"b6cae41b-6ca9-46cc-952a-68cc32d8fd02\") " pod="kube-system/kube-proxy-tdr6t" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543748 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cni-path\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543763 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-lib-modules\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543784 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-lib-modules\") pod \"kube-proxy-tdr6t\" (UID: \"b6cae41b-6ca9-46cc-952a-68cc32d8fd02\") " pod="kube-system/kube-proxy-tdr6t" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543804 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-cgroup\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543822 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-etc-cni-netd\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544626 kubelet[3184]: I0620 19:16:21.543837 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-xtables-lock\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.544847 kubelet[3184]: I0620 19:16:21.543858 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-config-path\") pod \"cilium-j64d8\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " pod="kube-system/cilium-j64d8" Jun 20 19:16:21.656602 kubelet[3184]: E0620 19:16:21.656570 3184 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:16:21.656602 kubelet[3184]: E0620 19:16:21.656601 3184 projected.go:194] Error preparing data for projected volume kube-api-access-llljq for pod kube-system/cilium-j64d8: configmap "kube-root-ca.crt" not found Jun 20 19:16:21.656757 kubelet[3184]: E0620 19:16:21.656666 3184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq podName:8e5b77a2-df05-4f69-b053-039d452cf80e nodeName:}" 
failed. No retries permitted until 2025-06-20 19:16:22.156642509 +0000 UTC m=+6.238058038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-llljq" (UniqueName: "kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq") pod "cilium-j64d8" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e") : configmap "kube-root-ca.crt" not found Jun 20 19:16:21.662103 kubelet[3184]: E0620 19:16:21.662077 3184 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:16:21.662309 kubelet[3184]: E0620 19:16:21.662229 3184 projected.go:194] Error preparing data for projected volume kube-api-access-ltmgf for pod kube-system/kube-proxy-tdr6t: configmap "kube-root-ca.crt" not found Jun 20 19:16:21.662309 kubelet[3184]: E0620 19:16:21.662289 3184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-kube-api-access-ltmgf podName:b6cae41b-6ca9-46cc-952a-68cc32d8fd02 nodeName:}" failed. No retries permitted until 2025-06-20 19:16:22.162272004 +0000 UTC m=+6.243687526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltmgf" (UniqueName: "kubernetes.io/projected/b6cae41b-6ca9-46cc-952a-68cc32d8fd02-kube-api-access-ltmgf") pod "kube-proxy-tdr6t" (UID: "b6cae41b-6ca9-46cc-952a-68cc32d8fd02") : configmap "kube-root-ca.crt" not found Jun 20 19:16:21.992339 systemd[1]: Created slice kubepods-besteffort-pod22059cc1_0f28_493a_943b_1880642d788c.slice - libcontainer container kubepods-besteffort-pod22059cc1_0f28_493a_943b_1880642d788c.slice. 
Jun 20 19:16:22.048752 kubelet[3184]: I0620 19:16:22.048718 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcgb\" (UniqueName: \"kubernetes.io/projected/22059cc1-0f28-493a-943b-1880642d788c-kube-api-access-8pcgb\") pod \"cilium-operator-6c4d7847fc-pjvxr\" (UID: \"22059cc1-0f28-493a-943b-1880642d788c\") " pod="kube-system/cilium-operator-6c4d7847fc-pjvxr" Jun 20 19:16:22.048752 kubelet[3184]: I0620 19:16:22.048759 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22059cc1-0f28-493a-943b-1880642d788c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pjvxr\" (UID: \"22059cc1-0f28-493a-943b-1880642d788c\") " pod="kube-system/cilium-operator-6c4d7847fc-pjvxr" Jun 20 19:16:22.296165 containerd[1721]: time="2025-06-20T19:16:22.296038005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pjvxr,Uid:22059cc1-0f28-493a-943b-1880642d788c,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:22.338533 containerd[1721]: time="2025-06-20T19:16:22.338332835Z" level=info msg="connecting to shim 792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2" address="unix:///run/containerd/s/d82b216de020466e6741848279c8fadaf4c889ff948e3c5c221eb9c707768b54" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:22.362704 systemd[1]: Started cri-containerd-792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2.scope - libcontainer container 792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2. 
Jun 20 19:16:22.405167 containerd[1721]: time="2025-06-20T19:16:22.405126274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pjvxr,Uid:22059cc1-0f28-493a-943b-1880642d788c,Namespace:kube-system,Attempt:0,} returns sandbox id \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\"" Jun 20 19:16:22.407056 containerd[1721]: time="2025-06-20T19:16:22.406827068Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:16:22.422604 containerd[1721]: time="2025-06-20T19:16:22.422562333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j64d8,Uid:8e5b77a2-df05-4f69-b053-039d452cf80e,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:22.435611 containerd[1721]: time="2025-06-20T19:16:22.435579851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdr6t,Uid:b6cae41b-6ca9-46cc-952a-68cc32d8fd02,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:22.478099 containerd[1721]: time="2025-06-20T19:16:22.477701186Z" level=info msg="connecting to shim 5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:22.499393 containerd[1721]: time="2025-06-20T19:16:22.499331752Z" level=info msg="connecting to shim 3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8" address="unix:///run/containerd/s/84c4ef2fa25a333c85d31ba1afc6103bfa50bde07ba87d1c2e7d4e06ba65d450" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:22.501898 systemd[1]: Started cri-containerd-5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570.scope - libcontainer container 5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570. 
Jun 20 19:16:22.525657 systemd[1]: Started cri-containerd-3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8.scope - libcontainer container 3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8. Jun 20 19:16:22.533461 containerd[1721]: time="2025-06-20T19:16:22.533419000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j64d8,Uid:8e5b77a2-df05-4f69-b053-039d452cf80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\"" Jun 20 19:16:22.551833 containerd[1721]: time="2025-06-20T19:16:22.551664888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdr6t,Uid:b6cae41b-6ca9-46cc-952a-68cc32d8fd02,Namespace:kube-system,Attempt:0,} returns sandbox id \"3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8\"" Jun 20 19:16:22.558662 containerd[1721]: time="2025-06-20T19:16:22.558614688Z" level=info msg="CreateContainer within sandbox \"3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:16:22.578740 containerd[1721]: time="2025-06-20T19:16:22.578704604Z" level=info msg="Container f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:22.593763 containerd[1721]: time="2025-06-20T19:16:22.593728195Z" level=info msg="CreateContainer within sandbox \"3022c783349f6a27bf46b9512daa7f04686aff88bafc593f680564dbf9984ce8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5\"" Jun 20 19:16:22.594528 containerd[1721]: time="2025-06-20T19:16:22.594440330Z" level=info msg="StartContainer for \"f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5\"" Jun 20 19:16:22.596359 containerd[1721]: time="2025-06-20T19:16:22.596250004Z" level=info msg="connecting to shim 
f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5" address="unix:///run/containerd/s/84c4ef2fa25a333c85d31ba1afc6103bfa50bde07ba87d1c2e7d4e06ba65d450" protocol=ttrpc version=3 Jun 20 19:16:22.622679 systemd[1]: Started cri-containerd-f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5.scope - libcontainer container f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5. Jun 20 19:16:22.661638 containerd[1721]: time="2025-06-20T19:16:22.661577716Z" level=info msg="StartContainer for \"f5b3eca312fbe21d2262bfb11881b3908799c9f0712b4d47d7c4f5429665d9f5\" returns successfully" Jun 20 19:16:23.822269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340042585.mount: Deactivated successfully. Jun 20 19:16:24.038697 kubelet[3184]: I0620 19:16:24.038537 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tdr6t" podStartSLOduration=3.038420365 podStartE2EDuration="3.038420365s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:23.082834662 +0000 UTC m=+7.164250199" watchObservedRunningTime="2025-06-20 19:16:24.038420365 +0000 UTC m=+8.119835892" Jun 20 19:16:24.403456 containerd[1721]: time="2025-06-20T19:16:24.403402349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:24.406180 containerd[1721]: time="2025-06-20T19:16:24.406147174Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:16:24.409144 containerd[1721]: time="2025-06-20T19:16:24.409102328Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:24.410195 containerd[1721]: time="2025-06-20T19:16:24.410063616Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.003201718s" Jun 20 19:16:24.410195 containerd[1721]: time="2025-06-20T19:16:24.410099772Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:16:24.411243 containerd[1721]: time="2025-06-20T19:16:24.411216814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:16:24.416981 containerd[1721]: time="2025-06-20T19:16:24.416940900Z" level=info msg="CreateContainer within sandbox \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:16:24.435356 containerd[1721]: time="2025-06-20T19:16:24.434691619Z" level=info msg="Container bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:24.445664 containerd[1721]: time="2025-06-20T19:16:24.445634562Z" level=info msg="CreateContainer within sandbox \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\"" Jun 20 
19:16:24.446891 containerd[1721]: time="2025-06-20T19:16:24.446149680Z" level=info msg="StartContainer for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\"" Jun 20 19:16:24.447127 containerd[1721]: time="2025-06-20T19:16:24.447097330Z" level=info msg="connecting to shim bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97" address="unix:///run/containerd/s/d82b216de020466e6741848279c8fadaf4c889ff948e3c5c221eb9c707768b54" protocol=ttrpc version=3 Jun 20 19:16:24.465672 systemd[1]: Started cri-containerd-bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97.scope - libcontainer container bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97. Jun 20 19:16:24.493603 containerd[1721]: time="2025-06-20T19:16:24.493561033Z" level=info msg="StartContainer for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" returns successfully" Jun 20 19:16:27.096429 kubelet[3184]: I0620 19:16:27.096297 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pjvxr" podStartSLOduration=4.09167917 podStartE2EDuration="6.096276124s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="2025-06-20 19:16:22.406474135 +0000 UTC m=+6.487889661" lastFinishedPulling="2025-06-20 19:16:24.411071088 +0000 UTC m=+8.492486615" observedRunningTime="2025-06-20 19:16:25.109624726 +0000 UTC m=+9.191040258" watchObservedRunningTime="2025-06-20 19:16:27.096276124 +0000 UTC m=+11.177691689" Jun 20 19:16:29.057483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120497074.mount: Deactivated successfully. 
Jun 20 19:16:30.628530 containerd[1721]: time="2025-06-20T19:16:30.628472191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:30.636749 containerd[1721]: time="2025-06-20T19:16:30.635520722Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:16:30.636749 containerd[1721]: time="2025-06-20T19:16:30.635622324Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:16:30.636749 containerd[1721]: time="2025-06-20T19:16:30.636630230Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.225368461s" Jun 20 19:16:30.636749 containerd[1721]: time="2025-06-20T19:16:30.636659055Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:16:30.643426 containerd[1721]: time="2025-06-20T19:16:30.643391462Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:16:30.665861 containerd[1721]: time="2025-06-20T19:16:30.665823894Z" level=info msg="Container 192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18: CDI 
devices from CRI Config.CDIDevices: []" Jun 20 19:16:30.681010 containerd[1721]: time="2025-06-20T19:16:30.680976992Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\"" Jun 20 19:16:30.681652 containerd[1721]: time="2025-06-20T19:16:30.681626372Z" level=info msg="StartContainer for \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\"" Jun 20 19:16:30.682742 containerd[1721]: time="2025-06-20T19:16:30.682693481Z" level=info msg="connecting to shim 192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" protocol=ttrpc version=3 Jun 20 19:16:30.702643 systemd[1]: Started cri-containerd-192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18.scope - libcontainer container 192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18. Jun 20 19:16:30.735834 containerd[1721]: time="2025-06-20T19:16:30.735796555Z" level=info msg="StartContainer for \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" returns successfully" Jun 20 19:16:30.737393 systemd[1]: cri-containerd-192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18.scope: Deactivated successfully. 
Jun 20 19:16:30.741368 containerd[1721]: time="2025-06-20T19:16:30.740861016Z" level=info msg="received exit event container_id:\"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" id:\"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" pid:3648 exited_at:{seconds:1750446990 nanos:740436270}" Jun 20 19:16:30.741368 containerd[1721]: time="2025-06-20T19:16:30.740972804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" id:\"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" pid:3648 exited_at:{seconds:1750446990 nanos:740436270}" Jun 20 19:16:30.757944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18-rootfs.mount: Deactivated successfully. Jun 20 19:16:35.111523 containerd[1721]: time="2025-06-20T19:16:35.110740217Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:16:35.130521 containerd[1721]: time="2025-06-20T19:16:35.129967853Z" level=info msg="Container 57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:35.146792 containerd[1721]: time="2025-06-20T19:16:35.146752546Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\"" Jun 20 19:16:35.147476 containerd[1721]: time="2025-06-20T19:16:35.147243105Z" level=info msg="StartContainer for \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\"" Jun 20 19:16:35.148221 containerd[1721]: time="2025-06-20T19:16:35.148183776Z" level=info msg="connecting to shim 
57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" protocol=ttrpc version=3 Jun 20 19:16:35.169687 systemd[1]: Started cri-containerd-57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6.scope - libcontainer container 57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6. Jun 20 19:16:35.200078 containerd[1721]: time="2025-06-20T19:16:35.200021277Z" level=info msg="StartContainer for \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" returns successfully" Jun 20 19:16:35.212341 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:16:35.212887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:16:35.215072 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:16:35.216419 containerd[1721]: time="2025-06-20T19:16:35.216383197Z" level=info msg="received exit event container_id:\"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" id:\"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" pid:3693 exited_at:{seconds:1750446995 nanos:215843307}" Jun 20 19:16:35.216685 containerd[1721]: time="2025-06-20T19:16:35.216409692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" id:\"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" pid:3693 exited_at:{seconds:1750446995 nanos:215843307}" Jun 20 19:16:35.217180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:16:35.221125 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:16:35.221966 systemd[1]: cri-containerd-57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6.scope: Deactivated successfully. 
Jun 20 19:16:35.241056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6-rootfs.mount: Deactivated successfully. Jun 20 19:16:35.247801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:16:36.112772 containerd[1721]: time="2025-06-20T19:16:36.112724624Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:16:36.131518 containerd[1721]: time="2025-06-20T19:16:36.131432309Z" level=info msg="Container d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:36.156759 containerd[1721]: time="2025-06-20T19:16:36.156704333Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\"" Jun 20 19:16:36.157840 containerd[1721]: time="2025-06-20T19:16:36.157147090Z" level=info msg="StartContainer for \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\"" Jun 20 19:16:36.159267 containerd[1721]: time="2025-06-20T19:16:36.158737986Z" level=info msg="connecting to shim d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" protocol=ttrpc version=3 Jun 20 19:16:36.180657 systemd[1]: Started cri-containerd-d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b.scope - libcontainer container d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b. Jun 20 19:16:36.208747 systemd[1]: cri-containerd-d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b.scope: Deactivated successfully. 
Jun 20 19:16:36.210314 containerd[1721]: time="2025-06-20T19:16:36.210278882Z" level=info msg="received exit event container_id:\"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" id:\"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" pid:3742 exited_at:{seconds:1750446996 nanos:210104797}" Jun 20 19:16:36.211557 containerd[1721]: time="2025-06-20T19:16:36.210488923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" id:\"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" pid:3742 exited_at:{seconds:1750446996 nanos:210104797}" Jun 20 19:16:36.213170 containerd[1721]: time="2025-06-20T19:16:36.213135167Z" level=info msg="StartContainer for \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" returns successfully" Jun 20 19:16:36.231045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:37.117307 containerd[1721]: time="2025-06-20T19:16:37.117202982Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:16:37.145650 containerd[1721]: time="2025-06-20T19:16:37.145605663Z" level=info msg="Container 2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:37.158117 containerd[1721]: time="2025-06-20T19:16:37.158077177Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\"" Jun 20 19:16:37.159389 containerd[1721]: time="2025-06-20T19:16:37.158577939Z" level=info msg="StartContainer for \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\"" Jun 20 19:16:37.159389 containerd[1721]: time="2025-06-20T19:16:37.159311448Z" level=info msg="connecting to shim 2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" protocol=ttrpc version=3 Jun 20 19:16:37.180632 systemd[1]: Started cri-containerd-2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368.scope - libcontainer container 2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368. Jun 20 19:16:37.203234 systemd[1]: cri-containerd-2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368.scope: Deactivated successfully. 
Jun 20 19:16:37.205147 containerd[1721]: time="2025-06-20T19:16:37.205088336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" id:\"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" pid:3781 exited_at:{seconds:1750446997 nanos:204851031}" Jun 20 19:16:37.207447 containerd[1721]: time="2025-06-20T19:16:37.207199638Z" level=info msg="received exit event container_id:\"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" id:\"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" pid:3781 exited_at:{seconds:1750446997 nanos:204851031}" Jun 20 19:16:37.214254 containerd[1721]: time="2025-06-20T19:16:37.214220556Z" level=info msg="StartContainer for \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" returns successfully" Jun 20 19:16:37.225253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:38.125860 containerd[1721]: time="2025-06-20T19:16:38.125795584Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:16:38.146997 containerd[1721]: time="2025-06-20T19:16:38.145059555Z" level=info msg="Container cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:38.164155 containerd[1721]: time="2025-06-20T19:16:38.164117776Z" level=info msg="CreateContainer within sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\"" Jun 20 19:16:38.164623 containerd[1721]: time="2025-06-20T19:16:38.164601502Z" level=info msg="StartContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\"" Jun 20 19:16:38.165745 containerd[1721]: time="2025-06-20T19:16:38.165634453Z" level=info msg="connecting to shim cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335" address="unix:///run/containerd/s/7c2014cb61189d684f042e7f006337af3cbad22a4d7f91578267031e301187e9" protocol=ttrpc version=3 Jun 20 19:16:38.187657 systemd[1]: Started cri-containerd-cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335.scope - libcontainer container cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335. 
Jun 20 19:16:38.220952 containerd[1721]: time="2025-06-20T19:16:38.220867509Z" level=info msg="StartContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" returns successfully" Jun 20 19:16:38.283171 containerd[1721]: time="2025-06-20T19:16:38.283125700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" id:\"692dead32071e75322514ae0d8fc398c37e51bf6bb3f78c96e2e2cad75962e00\" pid:3850 exited_at:{seconds:1750446998 nanos:282777750}" Jun 20 19:16:38.319204 kubelet[3184]: I0620 19:16:38.317102 3184 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:16:38.373397 systemd[1]: Created slice kubepods-burstable-pod157b71d4_af83_4dd3_a382_161af63ce3da.slice - libcontainer container kubepods-burstable-pod157b71d4_af83_4dd3_a382_161af63ce3da.slice. Jun 20 19:16:38.381819 systemd[1]: Created slice kubepods-burstable-pod9cf4ba28_3689_4035_9443_aaf4ee69b385.slice - libcontainer container kubepods-burstable-pod9cf4ba28_3689_4035_9443_aaf4ee69b385.slice. 
Jun 20 19:16:38.457220 kubelet[3184]: I0620 19:16:38.457173 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqrpz\" (UniqueName: \"kubernetes.io/projected/9cf4ba28-3689-4035-9443-aaf4ee69b385-kube-api-access-wqrpz\") pod \"coredns-674b8bbfcf-26vf8\" (UID: \"9cf4ba28-3689-4035-9443-aaf4ee69b385\") " pod="kube-system/coredns-674b8bbfcf-26vf8" Jun 20 19:16:38.457220 kubelet[3184]: I0620 19:16:38.457240 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccb8n\" (UniqueName: \"kubernetes.io/projected/157b71d4-af83-4dd3-a382-161af63ce3da-kube-api-access-ccb8n\") pod \"coredns-674b8bbfcf-dhjvc\" (UID: \"157b71d4-af83-4dd3-a382-161af63ce3da\") " pod="kube-system/coredns-674b8bbfcf-dhjvc" Jun 20 19:16:38.457467 kubelet[3184]: I0620 19:16:38.457266 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/157b71d4-af83-4dd3-a382-161af63ce3da-config-volume\") pod \"coredns-674b8bbfcf-dhjvc\" (UID: \"157b71d4-af83-4dd3-a382-161af63ce3da\") " pod="kube-system/coredns-674b8bbfcf-dhjvc" Jun 20 19:16:38.457467 kubelet[3184]: I0620 19:16:38.457288 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cf4ba28-3689-4035-9443-aaf4ee69b385-config-volume\") pod \"coredns-674b8bbfcf-26vf8\" (UID: \"9cf4ba28-3689-4035-9443-aaf4ee69b385\") " pod="kube-system/coredns-674b8bbfcf-26vf8" Jun 20 19:16:38.679164 containerd[1721]: time="2025-06-20T19:16:38.678350207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhjvc,Uid:157b71d4-af83-4dd3-a382-161af63ce3da,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:38.686565 containerd[1721]: time="2025-06-20T19:16:38.686029963Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-26vf8,Uid:9cf4ba28-3689-4035-9443-aaf4ee69b385,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:39.137146 kubelet[3184]: I0620 19:16:39.136634 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j64d8" podStartSLOduration=10.033614253 podStartE2EDuration="18.136615287s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="2025-06-20 19:16:22.534772786 +0000 UTC m=+6.616188311" lastFinishedPulling="2025-06-20 19:16:30.637773817 +0000 UTC m=+14.719189345" observedRunningTime="2025-06-20 19:16:39.136365525 +0000 UTC m=+23.217781059" watchObservedRunningTime="2025-06-20 19:16:39.136615287 +0000 UTC m=+23.218030812" Jun 20 19:16:40.382640 systemd-networkd[1355]: cilium_host: Link UP Jun 20 19:16:40.382789 systemd-networkd[1355]: cilium_net: Link UP Jun 20 19:16:40.382900 systemd-networkd[1355]: cilium_net: Gained carrier Jun 20 19:16:40.382995 systemd-networkd[1355]: cilium_host: Gained carrier Jun 20 19:16:40.450652 systemd-networkd[1355]: cilium_host: Gained IPv6LL Jun 20 19:16:40.525345 systemd-networkd[1355]: cilium_vxlan: Link UP Jun 20 19:16:40.525353 systemd-networkd[1355]: cilium_vxlan: Gained carrier Jun 20 19:16:40.666626 systemd-networkd[1355]: cilium_net: Gained IPv6LL Jun 20 19:16:40.756570 kernel: NET: Registered PF_ALG protocol family Jun 20 19:16:41.265295 systemd-networkd[1355]: lxc_health: Link UP Jun 20 19:16:41.277003 systemd-networkd[1355]: lxc_health: Gained carrier Jun 20 19:16:41.734892 systemd-networkd[1355]: lxcbdee731ec00b: Link UP Jun 20 19:16:41.742163 kernel: eth0: renamed from tmpacf55 Jun 20 19:16:41.742391 systemd-networkd[1355]: lxcbdee731ec00b: Gained carrier Jun 20 19:16:41.751526 systemd-networkd[1355]: lxc9ca77e644e13: Link UP Jun 20 19:16:41.758532 kernel: eth0: renamed from tmp55bcd Jun 20 19:16:41.765928 systemd-networkd[1355]: lxc9ca77e644e13: Gained carrier Jun 20 19:16:42.242729 systemd-networkd[1355]: cilium_vxlan: 
Gained IPv6LL Jun 20 19:16:42.562694 systemd-networkd[1355]: lxc_health: Gained IPv6LL Jun 20 19:16:43.010777 systemd-networkd[1355]: lxc9ca77e644e13: Gained IPv6LL Jun 20 19:16:43.714759 systemd-networkd[1355]: lxcbdee731ec00b: Gained IPv6LL Jun 20 19:16:44.839533 containerd[1721]: time="2025-06-20T19:16:44.837666202Z" level=info msg="connecting to shim acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad" address="unix:///run/containerd/s/f20a09b22274fcce3471167d22bc266725d716aeb041cdd20923cfef3d19951c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:44.857993 containerd[1721]: time="2025-06-20T19:16:44.857949520Z" level=info msg="connecting to shim 55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba" address="unix:///run/containerd/s/9f396915fc7220d7aaaf4044b689985b76b442c5e1228c5a4bf73685c0e9d0c7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:16:44.884639 systemd[1]: Started cri-containerd-acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad.scope - libcontainer container acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad. Jun 20 19:16:44.888644 systemd[1]: Started cri-containerd-55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba.scope - libcontainer container 55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba. 
Jun 20 19:16:44.942712 containerd[1721]: time="2025-06-20T19:16:44.942669083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhjvc,Uid:157b71d4-af83-4dd3-a382-161af63ce3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad\"" Jun 20 19:16:44.948770 containerd[1721]: time="2025-06-20T19:16:44.948716613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26vf8,Uid:9cf4ba28-3689-4035-9443-aaf4ee69b385,Namespace:kube-system,Attempt:0,} returns sandbox id \"55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba\"" Jun 20 19:16:44.951008 containerd[1721]: time="2025-06-20T19:16:44.950971570Z" level=info msg="CreateContainer within sandbox \"acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:44.956280 containerd[1721]: time="2025-06-20T19:16:44.956220506Z" level=info msg="CreateContainer within sandbox \"55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:16:44.980982 containerd[1721]: time="2025-06-20T19:16:44.980900271Z" level=info msg="Container 180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:44.983439 containerd[1721]: time="2025-06-20T19:16:44.983411349Z" level=info msg="Container ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:16:44.998248 containerd[1721]: time="2025-06-20T19:16:44.998209587Z" level=info msg="CreateContainer within sandbox \"acf55f0ff157cf6db1c42301b5f44304c6a4f8a5d118b355eae7fc60148969ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537\"" Jun 20 19:16:44.998869 containerd[1721]: 
time="2025-06-20T19:16:44.998745430Z" level=info msg="StartContainer for \"180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537\"" Jun 20 19:16:45.000729 containerd[1721]: time="2025-06-20T19:16:45.000693682Z" level=info msg="connecting to shim 180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537" address="unix:///run/containerd/s/f20a09b22274fcce3471167d22bc266725d716aeb041cdd20923cfef3d19951c" protocol=ttrpc version=3 Jun 20 19:16:45.002513 containerd[1721]: time="2025-06-20T19:16:45.002465178Z" level=info msg="CreateContainer within sandbox \"55bcda8979e750cce870b27f48c638e43eb783bf16f089a97ce820fcb1375aba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2\"" Jun 20 19:16:45.003348 containerd[1721]: time="2025-06-20T19:16:45.003239120Z" level=info msg="StartContainer for \"ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2\"" Jun 20 19:16:45.006470 containerd[1721]: time="2025-06-20T19:16:45.006417955Z" level=info msg="connecting to shim ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2" address="unix:///run/containerd/s/9f396915fc7220d7aaaf4044b689985b76b442c5e1228c5a4bf73685c0e9d0c7" protocol=ttrpc version=3 Jun 20 19:16:45.025685 systemd[1]: Started cri-containerd-180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537.scope - libcontainer container 180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537. Jun 20 19:16:45.033904 systemd[1]: Started cri-containerd-ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2.scope - libcontainer container ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2. 
Jun 20 19:16:45.076670 containerd[1721]: time="2025-06-20T19:16:45.076559718Z" level=info msg="StartContainer for \"180dbc95d3bb19f56397f45d5261e511a48af8bfe1f36e129d0338f8a098c537\" returns successfully" Jun 20 19:16:45.077505 containerd[1721]: time="2025-06-20T19:16:45.077454414Z" level=info msg="StartContainer for \"ccd79d200460f1bda1f639902f661f962876f6e0f808cd5837801ad3a0417ba2\" returns successfully" Jun 20 19:16:45.148607 kubelet[3184]: I0620 19:16:45.148218 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-26vf8" podStartSLOduration=24.148201425 podStartE2EDuration="24.148201425s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:45.147805535 +0000 UTC m=+29.229221067" watchObservedRunningTime="2025-06-20 19:16:45.148201425 +0000 UTC m=+29.229616963" Jun 20 19:16:45.164786 kubelet[3184]: I0620 19:16:45.164705 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhjvc" podStartSLOduration=24.164686714 podStartE2EDuration="24.164686714s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:45.163730379 +0000 UTC m=+29.245145913" watchObservedRunningTime="2025-06-20 19:16:45.164686714 +0000 UTC m=+29.246102245" Jun 20 19:16:45.817958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311855119.mount: Deactivated successfully. Jun 20 19:17:53.738197 systemd[1]: Started sshd@7-10.200.4.8:22-10.200.16.10:41466.service - OpenSSH per-connection server daemon (10.200.16.10:41466). 
Jun 20 19:17:54.330764 sshd[4507]: Accepted publickey for core from 10.200.16.10 port 41466 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:54.332012 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:54.336563 systemd-logind[1696]: New session 10 of user core. Jun 20 19:17:54.342673 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:17:54.816397 sshd[4509]: Connection closed by 10.200.16.10 port 41466 Jun 20 19:17:54.817234 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:54.820583 systemd[1]: sshd@7-10.200.4.8:22-10.200.16.10:41466.service: Deactivated successfully. Jun 20 19:17:54.822386 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:17:54.823335 systemd-logind[1696]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:17:54.824604 systemd-logind[1696]: Removed session 10. Jun 20 19:17:59.940290 systemd[1]: Started sshd@8-10.200.4.8:22-10.200.16.10:52062.service - OpenSSH per-connection server daemon (10.200.16.10:52062). Jun 20 19:18:00.539738 sshd[4522]: Accepted publickey for core from 10.200.16.10 port 52062 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:00.541148 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:00.545981 systemd-logind[1696]: New session 11 of user core. Jun 20 19:18:00.552662 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:18:01.013214 sshd[4524]: Connection closed by 10.200.16.10 port 52062 Jun 20 19:18:01.013849 sshd-session[4522]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:01.017928 systemd[1]: sshd@8-10.200.4.8:22-10.200.16.10:52062.service: Deactivated successfully. Jun 20 19:18:01.019842 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:18:01.020752 systemd-logind[1696]: Session 11 logged out. 
Waiting for processes to exit. Jun 20 19:18:01.022017 systemd-logind[1696]: Removed session 11. Jun 20 19:18:06.126945 systemd[1]: Started sshd@9-10.200.4.8:22-10.200.16.10:52078.service - OpenSSH per-connection server daemon (10.200.16.10:52078). Jun 20 19:18:06.724282 sshd[4537]: Accepted publickey for core from 10.200.16.10 port 52078 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:06.725720 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:06.730574 systemd-logind[1696]: New session 12 of user core. Jun 20 19:18:06.736676 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:18:07.193659 sshd[4539]: Connection closed by 10.200.16.10 port 52078 Jun 20 19:18:07.194299 sshd-session[4537]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:07.197827 systemd[1]: sshd@9-10.200.4.8:22-10.200.16.10:52078.service: Deactivated successfully. Jun 20 19:18:07.199874 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:18:07.200637 systemd-logind[1696]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:18:07.201817 systemd-logind[1696]: Removed session 12. Jun 20 19:18:12.302778 systemd[1]: Started sshd@10-10.200.4.8:22-10.200.16.10:47456.service - OpenSSH per-connection server daemon (10.200.16.10:47456). Jun 20 19:18:12.902851 sshd[4552]: Accepted publickey for core from 10.200.16.10 port 47456 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:12.904280 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:12.908576 systemd-logind[1696]: New session 13 of user core. Jun 20 19:18:12.912713 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 20 19:18:13.368960 sshd[4554]: Connection closed by 10.200.16.10 port 47456 Jun 20 19:18:13.369569 sshd-session[4552]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:13.372593 systemd[1]: sshd@10-10.200.4.8:22-10.200.16.10:47456.service: Deactivated successfully. Jun 20 19:18:13.374611 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:18:13.376045 systemd-logind[1696]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:18:13.377283 systemd-logind[1696]: Removed session 13. Jun 20 19:18:13.476910 systemd[1]: Started sshd@11-10.200.4.8:22-10.200.16.10:47466.service - OpenSSH per-connection server daemon (10.200.16.10:47466). Jun 20 19:18:14.080227 sshd[4567]: Accepted publickey for core from 10.200.16.10 port 47466 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:14.081343 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:14.085566 systemd-logind[1696]: New session 14 of user core. Jun 20 19:18:14.090689 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:18:14.581022 sshd[4569]: Connection closed by 10.200.16.10 port 47466 Jun 20 19:18:14.581659 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:14.584408 systemd[1]: sshd@11-10.200.4.8:22-10.200.16.10:47466.service: Deactivated successfully. Jun 20 19:18:14.586302 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:18:14.588413 systemd-logind[1696]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:18:14.589391 systemd-logind[1696]: Removed session 14. Jun 20 19:18:14.689161 systemd[1]: Started sshd@12-10.200.4.8:22-10.200.16.10:47476.service - OpenSSH per-connection server daemon (10.200.16.10:47476). 
Jun 20 19:18:15.286654 sshd[4579]: Accepted publickey for core from 10.200.16.10 port 47476 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:15.287951 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:15.292922 systemd-logind[1696]: New session 15 of user core. Jun 20 19:18:15.299629 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:18:15.756031 sshd[4581]: Connection closed by 10.200.16.10 port 47476 Jun 20 19:18:15.756679 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:15.760164 systemd[1]: sshd@12-10.200.4.8:22-10.200.16.10:47476.service: Deactivated successfully. Jun 20 19:18:15.761982 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:18:15.762786 systemd-logind[1696]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:18:15.764056 systemd-logind[1696]: Removed session 15. Jun 20 19:18:20.866324 systemd[1]: Started sshd@13-10.200.4.8:22-10.200.16.10:33866.service - OpenSSH per-connection server daemon (10.200.16.10:33866). Jun 20 19:18:21.460952 sshd[4595]: Accepted publickey for core from 10.200.16.10 port 33866 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:21.462386 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:21.467399 systemd-logind[1696]: New session 16 of user core. Jun 20 19:18:21.474663 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:18:21.925568 sshd[4597]: Connection closed by 10.200.16.10 port 33866 Jun 20 19:18:21.926172 sshd-session[4595]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:21.929055 systemd[1]: sshd@13-10.200.4.8:22-10.200.16.10:33866.service: Deactivated successfully. Jun 20 19:18:21.930784 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:18:21.932156 systemd-logind[1696]: Session 16 logged out. 
Waiting for processes to exit. Jun 20 19:18:21.933578 systemd-logind[1696]: Removed session 16. Jun 20 19:18:22.030683 systemd[1]: Started sshd@14-10.200.4.8:22-10.200.16.10:33874.service - OpenSSH per-connection server daemon (10.200.16.10:33874). Jun 20 19:18:22.626986 sshd[4609]: Accepted publickey for core from 10.200.16.10 port 33874 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:22.628257 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:22.632559 systemd-logind[1696]: New session 17 of user core. Jun 20 19:18:22.638673 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:18:23.123954 sshd[4611]: Connection closed by 10.200.16.10 port 33874 Jun 20 19:18:23.124559 sshd-session[4609]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:23.127323 systemd[1]: sshd@14-10.200.4.8:22-10.200.16.10:33874.service: Deactivated successfully. Jun 20 19:18:23.129107 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:18:23.131368 systemd-logind[1696]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:18:23.132374 systemd-logind[1696]: Removed session 17. Jun 20 19:18:23.237919 systemd[1]: Started sshd@15-10.200.4.8:22-10.200.16.10:33884.service - OpenSSH per-connection server daemon (10.200.16.10:33884). Jun 20 19:18:23.844901 sshd[4623]: Accepted publickey for core from 10.200.16.10 port 33884 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:23.846172 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:23.850738 systemd-logind[1696]: New session 18 of user core. Jun 20 19:18:23.858648 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 19:18:25.157731 sshd[4625]: Connection closed by 10.200.16.10 port 33884 Jun 20 19:18:25.158403 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:25.161245 systemd[1]: sshd@15-10.200.4.8:22-10.200.16.10:33884.service: Deactivated successfully. Jun 20 19:18:25.163231 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:18:25.164825 systemd-logind[1696]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:18:25.166560 systemd-logind[1696]: Removed session 18. Jun 20 19:18:25.264222 systemd[1]: Started sshd@16-10.200.4.8:22-10.200.16.10:33894.service - OpenSSH per-connection server daemon (10.200.16.10:33894). Jun 20 19:18:25.862585 sshd[4642]: Accepted publickey for core from 10.200.16.10 port 33894 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:25.864035 sshd-session[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:25.868781 systemd-logind[1696]: New session 19 of user core. Jun 20 19:18:25.879671 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:18:26.424254 sshd[4644]: Connection closed by 10.200.16.10 port 33894 Jun 20 19:18:26.424952 sshd-session[4642]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:26.427758 systemd[1]: sshd@16-10.200.4.8:22-10.200.16.10:33894.service: Deactivated successfully. Jun 20 19:18:26.429561 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:18:26.431566 systemd-logind[1696]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:18:26.434624 systemd-logind[1696]: Removed session 19. Jun 20 19:18:26.529151 systemd[1]: Started sshd@17-10.200.4.8:22-10.200.16.10:33910.service - OpenSSH per-connection server daemon (10.200.16.10:33910). 
Jun 20 19:18:27.120855 sshd[4654]: Accepted publickey for core from 10.200.16.10 port 33910 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:27.122119 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:27.126680 systemd-logind[1696]: New session 20 of user core. Jun 20 19:18:27.130691 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:18:27.595901 sshd[4656]: Connection closed by 10.200.16.10 port 33910 Jun 20 19:18:27.596475 sshd-session[4654]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:27.599819 systemd[1]: sshd@17-10.200.4.8:22-10.200.16.10:33910.service: Deactivated successfully. Jun 20 19:18:27.601579 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:18:27.602357 systemd-logind[1696]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:18:27.603936 systemd-logind[1696]: Removed session 20. Jun 20 19:18:32.707441 systemd[1]: Started sshd@18-10.200.4.8:22-10.200.16.10:32800.service - OpenSSH per-connection server daemon (10.200.16.10:32800). Jun 20 19:18:33.303185 sshd[4670]: Accepted publickey for core from 10.200.16.10 port 32800 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:33.304654 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:33.309608 systemd-logind[1696]: New session 21 of user core. Jun 20 19:18:33.314661 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:18:33.769719 sshd[4672]: Connection closed by 10.200.16.10 port 32800 Jun 20 19:18:33.770328 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:33.772993 systemd[1]: sshd@18-10.200.4.8:22-10.200.16.10:32800.service: Deactivated successfully. Jun 20 19:18:33.774771 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:18:33.776102 systemd-logind[1696]: Session 21 logged out. 
Waiting for processes to exit. Jun 20 19:18:33.777343 systemd-logind[1696]: Removed session 21. Jun 20 19:18:38.884978 systemd[1]: Started sshd@19-10.200.4.8:22-10.200.16.10:39454.service - OpenSSH per-connection server daemon (10.200.16.10:39454). Jun 20 19:18:39.480076 sshd[4684]: Accepted publickey for core from 10.200.16.10 port 39454 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:39.481627 sshd-session[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:39.486345 systemd-logind[1696]: New session 22 of user core. Jun 20 19:18:39.491660 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:18:39.950586 sshd[4686]: Connection closed by 10.200.16.10 port 39454 Jun 20 19:18:39.951207 sshd-session[4684]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:39.954064 systemd[1]: sshd@19-10.200.4.8:22-10.200.16.10:39454.service: Deactivated successfully. Jun 20 19:18:39.955836 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:18:39.957244 systemd-logind[1696]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:18:39.958778 systemd-logind[1696]: Removed session 22. Jun 20 19:18:40.055937 systemd[1]: Started sshd@20-10.200.4.8:22-10.200.16.10:39468.service - OpenSSH per-connection server daemon (10.200.16.10:39468). Jun 20 19:18:40.661256 sshd[4698]: Accepted publickey for core from 10.200.16.10 port 39468 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:40.662704 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:40.667663 systemd-logind[1696]: New session 23 of user core. Jun 20 19:18:40.671678 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 20 19:18:42.495755 containerd[1721]: time="2025-06-20T19:18:42.495675678Z" level=info msg="StopContainer for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" with timeout 30 (s)" Jun 20 19:18:42.496911 containerd[1721]: time="2025-06-20T19:18:42.496869197Z" level=info msg="Stop container \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" with signal terminated" Jun 20 19:18:42.510315 systemd[1]: cri-containerd-bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97.scope: Deactivated successfully. Jun 20 19:18:42.513257 containerd[1721]: time="2025-06-20T19:18:42.513222561Z" level=info msg="received exit event container_id:\"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" id:\"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" pid:3586 exited_at:{seconds:1750447122 nanos:512603303}" Jun 20 19:18:42.513486 containerd[1721]: time="2025-06-20T19:18:42.513367199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" id:\"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" pid:3586 exited_at:{seconds:1750447122 nanos:512603303}" Jun 20 19:18:42.514257 containerd[1721]: time="2025-06-20T19:18:42.514226559Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:18:42.522100 containerd[1721]: time="2025-06-20T19:18:42.522054172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" id:\"2db1d7c705d77e90a5ab369ffbaf649f8d5786582ee4477f70448d9abb21cb14\" pid:4721 exited_at:{seconds:1750447122 nanos:519697524}" Jun 20 19:18:42.524095 containerd[1721]: time="2025-06-20T19:18:42.524025180Z" level=info 
msg="StopContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" with timeout 2 (s)" Jun 20 19:18:42.524756 containerd[1721]: time="2025-06-20T19:18:42.524730477Z" level=info msg="Stop container \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" with signal terminated" Jun 20 19:18:42.533417 systemd-networkd[1355]: lxc_health: Link DOWN Jun 20 19:18:42.533426 systemd-networkd[1355]: lxc_health: Lost carrier Jun 20 19:18:42.545640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97-rootfs.mount: Deactivated successfully. Jun 20 19:18:42.549033 systemd[1]: cri-containerd-cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335.scope: Deactivated successfully. Jun 20 19:18:42.549620 systemd[1]: cri-containerd-cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335.scope: Consumed 5.599s CPU time, 123.9M memory peak, 144K read from disk, 13.3M written to disk. Jun 20 19:18:42.550421 containerd[1721]: time="2025-06-20T19:18:42.550392801Z" level=info msg="received exit event container_id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" pid:3821 exited_at:{seconds:1750447122 nanos:550183825}" Jun 20 19:18:42.550615 containerd[1721]: time="2025-06-20T19:18:42.550415333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" id:\"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" pid:3821 exited_at:{seconds:1750447122 nanos:550183825}" Jun 20 19:18:42.567934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335-rootfs.mount: Deactivated successfully. 
Jun 20 19:18:42.639824 containerd[1721]: time="2025-06-20T19:18:42.639785858Z" level=info msg="StopContainer for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" returns successfully" Jun 20 19:18:42.641072 containerd[1721]: time="2025-06-20T19:18:42.640980080Z" level=info msg="StopPodSandbox for \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\"" Jun 20 19:18:42.641254 containerd[1721]: time="2025-06-20T19:18:42.641053579Z" level=info msg="Container to stop \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.641820 containerd[1721]: time="2025-06-20T19:18:42.641780612Z" level=info msg="StopContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" returns successfully" Jun 20 19:18:42.642419 containerd[1721]: time="2025-06-20T19:18:42.642392079Z" level=info msg="StopPodSandbox for \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\"" Jun 20 19:18:42.642745 containerd[1721]: time="2025-06-20T19:18:42.642677441Z" level=info msg="Container to stop \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.642745 containerd[1721]: time="2025-06-20T19:18:42.642697377Z" level=info msg="Container to stop \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.642745 containerd[1721]: time="2025-06-20T19:18:42.642707221Z" level=info msg="Container to stop \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.642745 containerd[1721]: time="2025-06-20T19:18:42.642715516Z" level=info msg="Container to stop \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.642745 containerd[1721]: time="2025-06-20T19:18:42.642723463Z" level=info msg="Container to stop \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:18:42.649931 systemd[1]: cri-containerd-5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570.scope: Deactivated successfully. Jun 20 19:18:42.651723 systemd[1]: cri-containerd-792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2.scope: Deactivated successfully. Jun 20 19:18:42.653345 containerd[1721]: time="2025-06-20T19:18:42.652369993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" id:\"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" pid:3296 exit_status:137 exited_at:{seconds:1750447122 nanos:651423865}" Jun 20 19:18:42.677886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570-rootfs.mount: Deactivated successfully. Jun 20 19:18:42.682733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2-rootfs.mount: Deactivated successfully. 
Jun 20 19:18:42.699024 containerd[1721]: time="2025-06-20T19:18:42.698829596Z" level=info msg="shim disconnected" id=792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2 namespace=k8s.io Jun 20 19:18:42.699024 containerd[1721]: time="2025-06-20T19:18:42.698865445Z" level=warning msg="cleaning up after shim disconnected" id=792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2 namespace=k8s.io Jun 20 19:18:42.699024 containerd[1721]: time="2025-06-20T19:18:42.698873870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:18:42.701308 containerd[1721]: time="2025-06-20T19:18:42.700609914Z" level=info msg="shim disconnected" id=5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570 namespace=k8s.io Jun 20 19:18:42.701308 containerd[1721]: time="2025-06-20T19:18:42.700644871Z" level=warning msg="cleaning up after shim disconnected" id=5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570 namespace=k8s.io Jun 20 19:18:42.701308 containerd[1721]: time="2025-06-20T19:18:42.700653143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:18:42.713070 containerd[1721]: time="2025-06-20T19:18:42.713038188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" id:\"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" pid:3360 exit_status:137 exited_at:{seconds:1750447122 nanos:654427563}" Jun 20 19:18:42.713314 containerd[1721]: time="2025-06-20T19:18:42.713300688Z" level=info msg="received exit event sandbox_id:\"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" exit_status:137 exited_at:{seconds:1750447122 nanos:654427563}" Jun 20 19:18:42.715065 containerd[1721]: time="2025-06-20T19:18:42.715037630Z" level=info msg="received exit event sandbox_id:\"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" exit_status:137 exited_at:{seconds:1750447122 nanos:651423865}" Jun 20 
19:18:42.715586 containerd[1721]: time="2025-06-20T19:18:42.715561830Z" level=info msg="TearDown network for sandbox \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" successfully" Jun 20 19:18:42.715659 containerd[1721]: time="2025-06-20T19:18:42.715649877Z" level=info msg="StopPodSandbox for \"5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570\" returns successfully" Jun 20 19:18:42.716311 containerd[1721]: time="2025-06-20T19:18:42.716278710Z" level=info msg="TearDown network for sandbox \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" successfully" Jun 20 19:18:42.716311 containerd[1721]: time="2025-06-20T19:18:42.716305566Z" level=info msg="StopPodSandbox for \"792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2\" returns successfully" Jun 20 19:18:42.717612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-792c2cbc8bbbc7211764636fc0af42140c127b7109b00374c6e912ed19ac61b2-shm.mount: Deactivated successfully. 
Jun 20 19:18:42.829839 kubelet[3184]: I0620 19:18:42.829708 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-xtables-lock\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.829839 kubelet[3184]: I0620 19:18:42.829750 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-etc-cni-netd\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.829839 kubelet[3184]: I0620 19:18:42.829779 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-config-path\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.829839 kubelet[3184]: I0620 19:18:42.829799 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e5b77a2-df05-4f69-b053-039d452cf80e-clustermesh-secrets\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.829839 kubelet[3184]: I0620 19:18:42.829824 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llljq\" (UniqueName: \"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829844 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-hubble-tls\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829863 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22059cc1-0f28-493a-943b-1880642d788c-cilium-config-path\") pod \"22059cc1-0f28-493a-943b-1880642d788c\" (UID: \"22059cc1-0f28-493a-943b-1880642d788c\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829878 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-run\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829893 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pcgb\" (UniqueName: \"kubernetes.io/projected/22059cc1-0f28-493a-943b-1880642d788c-kube-api-access-8pcgb\") pod \"22059cc1-0f28-493a-943b-1880642d788c\" (UID: \"22059cc1-0f28-493a-943b-1880642d788c\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829911 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-net\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.830362 kubelet[3184]: I0620 19:18:42.829925 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cni-path\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.829941 3184 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-bpf-maps\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.829956 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-hostproc\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.829978 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-lib-modules\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.829994 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-kernel\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.830011 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-cgroup\") pod \"8e5b77a2-df05-4f69-b053-039d452cf80e\" (UID: \"8e5b77a2-df05-4f69-b053-039d452cf80e\") " Jun 20 19:18:42.831629 kubelet[3184]: I0620 19:18:42.830084 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: 
"8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.831802 kubelet[3184]: I0620 19:18:42.830124 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.831802 kubelet[3184]: I0620 19:18:42.830137 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832024 kubelet[3184]: I0620 19:18:42.831974 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832079 kubelet[3184]: I0620 19:18:42.832070 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832141 kubelet[3184]: I0620 19:18:42.832125 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832248 kubelet[3184]: I0620 19:18:42.832180 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832248 kubelet[3184]: I0620 19:18:42.832198 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.832248 kubelet[3184]: I0620 19:18:42.832212 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.835379 kubelet[3184]: I0620 19:18:42.835154 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22059cc1-0f28-493a-943b-1880642d788c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22059cc1-0f28-493a-943b-1880642d788c" (UID: "22059cc1-0f28-493a-943b-1880642d788c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:18:42.835379 kubelet[3184]: I0620 19:18:42.835216 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:18:42.837281 kubelet[3184]: I0620 19:18:42.837249 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:18:42.837639 kubelet[3184]: I0620 19:18:42.837615 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22059cc1-0f28-493a-943b-1880642d788c-kube-api-access-8pcgb" (OuterVolumeSpecName: "kube-api-access-8pcgb") pod "22059cc1-0f28-493a-943b-1880642d788c" (UID: "22059cc1-0f28-493a-943b-1880642d788c"). InnerVolumeSpecName "kube-api-access-8pcgb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:18:42.838714 kubelet[3184]: I0620 19:18:42.838688 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:18:42.838928 kubelet[3184]: I0620 19:18:42.838913 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq" (OuterVolumeSpecName: "kube-api-access-llljq") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "kube-api-access-llljq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:18:42.839130 kubelet[3184]: I0620 19:18:42.839113 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5b77a2-df05-4f69-b053-039d452cf80e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e5b77a2-df05-4f69-b053-039d452cf80e" (UID: "8e5b77a2-df05-4f69-b053-039d452cf80e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:18:42.930543 kubelet[3184]: I0620 19:18:42.930473 3184 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-hubble-tls\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930543 kubelet[3184]: I0620 19:18:42.930538 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22059cc1-0f28-493a-943b-1880642d788c-cilium-config-path\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930543 kubelet[3184]: I0620 19:18:42.930550 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-run\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930561 3184 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pcgb\" (UniqueName: \"kubernetes.io/projected/22059cc1-0f28-493a-943b-1880642d788c-kube-api-access-8pcgb\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930574 3184 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-net\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930588 3184 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cni-path\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930598 3184 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-bpf-maps\") on node 
\"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930608 3184 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-hostproc\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930621 3184 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-lib-modules\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930631 3184 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-host-proc-sys-kernel\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930804 kubelet[3184]: I0620 19:18:42.930641 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-cgroup\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930966 kubelet[3184]: I0620 19:18:42.930653 3184 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-xtables-lock\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930966 kubelet[3184]: I0620 19:18:42.930662 3184 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e5b77a2-df05-4f69-b053-039d452cf80e-etc-cni-netd\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930966 kubelet[3184]: I0620 19:18:42.930674 3184 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e5b77a2-df05-4f69-b053-039d452cf80e-cilium-config-path\") on node 
\"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930966 kubelet[3184]: I0620 19:18:42.930686 3184 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e5b77a2-df05-4f69-b053-039d452cf80e-clustermesh-secrets\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:42.930966 kubelet[3184]: I0620 19:18:42.930698 3184 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-llljq\" (UniqueName: \"kubernetes.io/projected/8e5b77a2-df05-4f69-b053-039d452cf80e-kube-api-access-llljq\") on node \"ci-4344.1.0-a-324c5119a7\" DevicePath \"\"" Jun 20 19:18:43.367151 kubelet[3184]: I0620 19:18:43.367117 3184 scope.go:117] "RemoveContainer" containerID="cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335" Jun 20 19:18:43.370107 containerd[1721]: time="2025-06-20T19:18:43.370003598Z" level=info msg="RemoveContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\"" Jun 20 19:18:43.375110 systemd[1]: Removed slice kubepods-burstable-pod8e5b77a2_df05_4f69_b053_039d452cf80e.slice - libcontainer container kubepods-burstable-pod8e5b77a2_df05_4f69_b053_039d452cf80e.slice. Jun 20 19:18:43.375242 systemd[1]: kubepods-burstable-pod8e5b77a2_df05_4f69_b053_039d452cf80e.slice: Consumed 5.672s CPU time, 124.4M memory peak, 144K read from disk, 13.3M written to disk. Jun 20 19:18:43.379937 systemd[1]: Removed slice kubepods-besteffort-pod22059cc1_0f28_493a_943b_1880642d788c.slice - libcontainer container kubepods-besteffort-pod22059cc1_0f28_493a_943b_1880642d788c.slice. 
Jun 20 19:18:43.383202 containerd[1721]: time="2025-06-20T19:18:43.383155585Z" level=info msg="RemoveContainer for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" returns successfully" Jun 20 19:18:43.383990 kubelet[3184]: I0620 19:18:43.383956 3184 scope.go:117] "RemoveContainer" containerID="2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368" Jun 20 19:18:43.385622 containerd[1721]: time="2025-06-20T19:18:43.385595982Z" level=info msg="RemoveContainer for \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\"" Jun 20 19:18:43.394803 containerd[1721]: time="2025-06-20T19:18:43.394715826Z" level=info msg="RemoveContainer for \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" returns successfully" Jun 20 19:18:43.395212 kubelet[3184]: I0620 19:18:43.395151 3184 scope.go:117] "RemoveContainer" containerID="d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b" Jun 20 19:18:43.398900 containerd[1721]: time="2025-06-20T19:18:43.398810583Z" level=info msg="RemoveContainer for \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\"" Jun 20 19:18:43.408624 containerd[1721]: time="2025-06-20T19:18:43.408591979Z" level=info msg="RemoveContainer for \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" returns successfully" Jun 20 19:18:43.408771 kubelet[3184]: I0620 19:18:43.408747 3184 scope.go:117] "RemoveContainer" containerID="57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6" Jun 20 19:18:43.410073 containerd[1721]: time="2025-06-20T19:18:43.410036127Z" level=info msg="RemoveContainer for \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\"" Jun 20 19:18:43.416602 containerd[1721]: time="2025-06-20T19:18:43.416578220Z" level=info msg="RemoveContainer for \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" returns successfully" Jun 20 19:18:43.416759 kubelet[3184]: I0620 19:18:43.416744 3184 scope.go:117] 
"RemoveContainer" containerID="192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18" Jun 20 19:18:43.417910 containerd[1721]: time="2025-06-20T19:18:43.417890346Z" level=info msg="RemoveContainer for \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\"" Jun 20 19:18:43.425667 containerd[1721]: time="2025-06-20T19:18:43.425642355Z" level=info msg="RemoveContainer for \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" returns successfully" Jun 20 19:18:43.425831 kubelet[3184]: I0620 19:18:43.425815 3184 scope.go:117] "RemoveContainer" containerID="cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335" Jun 20 19:18:43.426043 containerd[1721]: time="2025-06-20T19:18:43.426006453Z" level=error msg="ContainerStatus for \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\": not found" Jun 20 19:18:43.426175 kubelet[3184]: E0620 19:18:43.426155 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\": not found" containerID="cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335" Jun 20 19:18:43.426231 kubelet[3184]: I0620 19:18:43.426182 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335"} err="failed to get container status \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\": rpc error: code = NotFound desc = an error occurred when try to find container \"cff58d65319e4756421aa28559934241f5c44105d150c515759b3f7f84d8e335\": not found" Jun 20 19:18:43.426262 kubelet[3184]: I0620 19:18:43.426236 3184 scope.go:117] "RemoveContainer" 
containerID="2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368" Jun 20 19:18:43.426411 containerd[1721]: time="2025-06-20T19:18:43.426379484Z" level=error msg="ContainerStatus for \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\": not found" Jun 20 19:18:43.426510 kubelet[3184]: E0620 19:18:43.426473 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\": not found" containerID="2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368" Jun 20 19:18:43.426546 kubelet[3184]: I0620 19:18:43.426506 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368"} err="failed to get container status \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ce55488c489d97f3d7415905616e22a4442323d59d8ff1f3bd3821a53387368\": not found" Jun 20 19:18:43.426546 kubelet[3184]: I0620 19:18:43.426523 3184 scope.go:117] "RemoveContainer" containerID="d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b" Jun 20 19:18:43.426700 containerd[1721]: time="2025-06-20T19:18:43.426672405Z" level=error msg="ContainerStatus for \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\": not found" Jun 20 19:18:43.426775 kubelet[3184]: E0620 19:18:43.426759 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\": not found" containerID="d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b" Jun 20 19:18:43.426805 kubelet[3184]: I0620 19:18:43.426777 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b"} err="failed to get container status \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d020e05c7ec67701de9af7a6a3036c8fea6d9dc8f2fb65667c2fa3d2c71cdc0b\": not found" Jun 20 19:18:43.426805 kubelet[3184]: I0620 19:18:43.426790 3184 scope.go:117] "RemoveContainer" containerID="57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6" Jun 20 19:18:43.426959 containerd[1721]: time="2025-06-20T19:18:43.426931931Z" level=error msg="ContainerStatus for \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\": not found" Jun 20 19:18:43.427036 kubelet[3184]: E0620 19:18:43.427021 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\": not found" containerID="57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6" Jun 20 19:18:43.427065 kubelet[3184]: I0620 19:18:43.427040 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6"} err="failed to get container status \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"57c5e221f5dbf57cef2a72e78c30ca1f2977549409957caa71e80a68f2420ad6\": not found" Jun 20 19:18:43.427065 kubelet[3184]: I0620 19:18:43.427053 3184 scope.go:117] "RemoveContainer" containerID="192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18" Jun 20 19:18:43.427212 containerd[1721]: time="2025-06-20T19:18:43.427171466Z" level=error msg="ContainerStatus for \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\": not found" Jun 20 19:18:43.427296 kubelet[3184]: E0620 19:18:43.427281 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\": not found" containerID="192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18" Jun 20 19:18:43.427369 kubelet[3184]: I0620 19:18:43.427298 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18"} err="failed to get container status \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\": rpc error: code = NotFound desc = an error occurred when try to find container \"192ea6916b38415a3dc92773c1cc117986c8bdc7011f224d30fc82e6dcdc3f18\": not found" Jun 20 19:18:43.427369 kubelet[3184]: I0620 19:18:43.427311 3184 scope.go:117] "RemoveContainer" containerID="bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97" Jun 20 19:18:43.428460 containerd[1721]: time="2025-06-20T19:18:43.428428282Z" level=info msg="RemoveContainer for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\"" Jun 20 19:18:43.434303 containerd[1721]: time="2025-06-20T19:18:43.434280250Z" level=info msg="RemoveContainer for 
\"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" returns successfully" Jun 20 19:18:43.434479 kubelet[3184]: I0620 19:18:43.434445 3184 scope.go:117] "RemoveContainer" containerID="bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97" Jun 20 19:18:43.434687 containerd[1721]: time="2025-06-20T19:18:43.434662370Z" level=error msg="ContainerStatus for \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\": not found" Jun 20 19:18:43.434810 kubelet[3184]: E0620 19:18:43.434790 3184 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\": not found" containerID="bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97" Jun 20 19:18:43.434867 kubelet[3184]: I0620 19:18:43.434812 3184 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97"} err="failed to get container status \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\": rpc error: code = NotFound desc = an error occurred when try to find container \"bea5f457845f3c9cbb7b54e56ff28ac8272571dbeb961e446f1adee45dcdfb97\": not found" Jun 20 19:18:43.542719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fae049058dcb40d8e903accce2905084c1d0a6c2afb288af6f207fcd105c570-shm.mount: Deactivated successfully. Jun 20 19:18:43.542822 systemd[1]: var-lib-kubelet-pods-8e5b77a2\x2ddf05\x2d4f69\x2db053\x2d039d452cf80e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dllljq.mount: Deactivated successfully. 
Jun 20 19:18:43.542897 systemd[1]: var-lib-kubelet-pods-22059cc1\x2d0f28\x2d493a\x2d943b\x2d1880642d788c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pcgb.mount: Deactivated successfully. Jun 20 19:18:43.542955 systemd[1]: var-lib-kubelet-pods-8e5b77a2\x2ddf05\x2d4f69\x2db053\x2d039d452cf80e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:18:43.543020 systemd[1]: var-lib-kubelet-pods-8e5b77a2\x2ddf05\x2d4f69\x2db053\x2d039d452cf80e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:18:44.023900 kubelet[3184]: I0620 19:18:44.023811 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22059cc1-0f28-493a-943b-1880642d788c" path="/var/lib/kubelet/pods/22059cc1-0f28-493a-943b-1880642d788c/volumes" Jun 20 19:18:44.024328 kubelet[3184]: I0620 19:18:44.024176 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5b77a2-df05-4f69-b053-039d452cf80e" path="/var/lib/kubelet/pods/8e5b77a2-df05-4f69-b053-039d452cf80e/volumes" Jun 20 19:18:44.522716 sshd[4700]: Connection closed by 10.200.16.10 port 39468 Jun 20 19:18:44.523577 sshd-session[4698]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:44.527356 systemd[1]: sshd@20-10.200.4.8:22-10.200.16.10:39468.service: Deactivated successfully. Jun 20 19:18:44.529308 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:18:44.530268 systemd-logind[1696]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:18:44.531607 systemd-logind[1696]: Removed session 23. Jun 20 19:18:44.629887 systemd[1]: Started sshd@21-10.200.4.8:22-10.200.16.10:39478.service - OpenSSH per-connection server daemon (10.200.16.10:39478). 
Jun 20 19:18:45.225809 sshd[4854]: Accepted publickey for core from 10.200.16.10 port 39478 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:18:45.227076 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:18:45.231691 systemd-logind[1696]: New session 24 of user core. Jun 20 19:18:45.233686 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:18:46.038671 systemd[1]: Created slice kubepods-burstable-podba896be0_3455_4fd8_8abc_5f0faafaea73.slice - libcontainer container kubepods-burstable-podba896be0_3455_4fd8_8abc_5f0faafaea73.slice. Jun 20 19:18:46.107683 sshd[4856]: Connection closed by 10.200.16.10 port 39478 Jun 20 19:18:46.109554 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:46.116876 systemd[1]: sshd@21-10.200.4.8:22-10.200.16.10:39478.service: Deactivated successfully. Jun 20 19:18:46.121378 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:18:46.125027 systemd-logind[1696]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:18:46.127752 systemd-logind[1696]: Removed session 24. 
Jun 20 19:18:46.134410 kubelet[3184]: E0620 19:18:46.134377 3184 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:18:46.149770 kubelet[3184]: I0620 19:18:46.149735 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-hostproc\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149781 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-host-proc-sys-net\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149800 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-cilium-cgroup\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149817 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-bpf-maps\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149837 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-xtables-lock\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149854 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba896be0-3455-4fd8-8abc-5f0faafaea73-cilium-config-path\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.149884 kubelet[3184]: I0620 19:18:46.149871 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba896be0-3455-4fd8-8abc-5f0faafaea73-hubble-tls\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149888 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-lib-modules\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149906 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba896be0-3455-4fd8-8abc-5f0faafaea73-clustermesh-secrets\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149924 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba896be0-3455-4fd8-8abc-5f0faafaea73-cilium-ipsec-secrets\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149942 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-cni-path\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149959 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-etc-cni-netd\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150035 kubelet[3184]: I0620 19:18:46.149975 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-cilium-run\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150176 kubelet[3184]: I0620 19:18:46.149995 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba896be0-3455-4fd8-8abc-5f0faafaea73-host-proc-sys-kernel\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.150176 kubelet[3184]: I0620 19:18:46.150014 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4r7m\" (UniqueName: \"kubernetes.io/projected/ba896be0-3455-4fd8-8abc-5f0faafaea73-kube-api-access-n4r7m\") pod \"cilium-256dm\" (UID: \"ba896be0-3455-4fd8-8abc-5f0faafaea73\") " pod="kube-system/cilium-256dm"
Jun 20 19:18:46.218057 systemd[1]: Started sshd@22-10.200.4.8:22-10.200.16.10:39486.service - OpenSSH per-connection server daemon (10.200.16.10:39486).
Jun 20 19:18:46.342323 containerd[1721]: time="2025-06-20T19:18:46.342282470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-256dm,Uid:ba896be0-3455-4fd8-8abc-5f0faafaea73,Namespace:kube-system,Attempt:0,}"
Jun 20 19:18:46.383439 containerd[1721]: time="2025-06-20T19:18:46.383379795Z" level=info msg="connecting to shim 18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:18:46.402692 systemd[1]: Started cri-containerd-18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605.scope - libcontainer container 18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605.
Jun 20 19:18:46.427673 containerd[1721]: time="2025-06-20T19:18:46.427633706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-256dm,Uid:ba896be0-3455-4fd8-8abc-5f0faafaea73,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\""
Jun 20 19:18:46.436942 containerd[1721]: time="2025-06-20T19:18:46.436904279Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:18:46.448882 containerd[1721]: time="2025-06-20T19:18:46.448847042Z" level=info msg="Container 1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:46.460111 containerd[1721]: time="2025-06-20T19:18:46.460076772Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\""
Jun 20 19:18:46.460593 containerd[1721]: time="2025-06-20T19:18:46.460568486Z" level=info msg="StartContainer for \"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\""
Jun 20 19:18:46.461661 containerd[1721]: time="2025-06-20T19:18:46.461626141Z" level=info msg="connecting to shim 1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" protocol=ttrpc version=3
Jun 20 19:18:46.480652 systemd[1]: Started cri-containerd-1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94.scope - libcontainer container 1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94.
Jun 20 19:18:46.508787 containerd[1721]: time="2025-06-20T19:18:46.508727534Z" level=info msg="StartContainer for \"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\" returns successfully"
Jun 20 19:18:46.512056 systemd[1]: cri-containerd-1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94.scope: Deactivated successfully.
Jun 20 19:18:46.513992 containerd[1721]: time="2025-06-20T19:18:46.513923435Z" level=info msg="received exit event container_id:\"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\" id:\"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\" pid:4930 exited_at:{seconds:1750447126 nanos:513445325}"
Jun 20 19:18:46.514139 containerd[1721]: time="2025-06-20T19:18:46.514095556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\" id:\"1a4e3bb442b524826434f75c77822b64861472c1f6c373fad29977b4be426e94\" pid:4930 exited_at:{seconds:1750447126 nanos:513445325}"
Jun 20 19:18:46.815797 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 39486 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:46.817001 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:46.821554 systemd-logind[1696]: New session 25 of user core.
Jun 20 19:18:46.825689 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:18:47.236639 sshd[4963]: Connection closed by 10.200.16.10 port 39486
Jun 20 19:18:47.237230 sshd-session[4866]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:47.239999 systemd[1]: sshd@22-10.200.4.8:22-10.200.16.10:39486.service: Deactivated successfully.
Jun 20 19:18:47.241985 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:18:47.244233 systemd-logind[1696]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:18:47.245285 systemd-logind[1696]: Removed session 25.
Jun 20 19:18:47.343001 systemd[1]: Started sshd@23-10.200.4.8:22-10.200.16.10:39498.service - OpenSSH per-connection server daemon (10.200.16.10:39498).
Jun 20 19:18:47.391096 containerd[1721]: time="2025-06-20T19:18:47.391036746Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:18:47.410674 containerd[1721]: time="2025-06-20T19:18:47.410558865Z" level=info msg="Container 71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:47.425737 containerd[1721]: time="2025-06-20T19:18:47.425697838Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\""
Jun 20 19:18:47.426253 containerd[1721]: time="2025-06-20T19:18:47.426219646Z" level=info msg="StartContainer for \"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\""
Jun 20 19:18:47.427269 containerd[1721]: time="2025-06-20T19:18:47.427239446Z" level=info msg="connecting to shim 71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" protocol=ttrpc version=3
Jun 20 19:18:47.447665 systemd[1]: Started cri-containerd-71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68.scope - libcontainer container 71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68.
Jun 20 19:18:47.476888 containerd[1721]: time="2025-06-20T19:18:47.476791973Z" level=info msg="StartContainer for \"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\" returns successfully"
Jun 20 19:18:47.479741 systemd[1]: cri-containerd-71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68.scope: Deactivated successfully.
Jun 20 19:18:47.480713 containerd[1721]: time="2025-06-20T19:18:47.480411349Z" level=info msg="received exit event container_id:\"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\" id:\"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\" pid:4985 exited_at:{seconds:1750447127 nanos:480130471}"
Jun 20 19:18:47.481550 containerd[1721]: time="2025-06-20T19:18:47.481248000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\" id:\"71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68\" pid:4985 exited_at:{seconds:1750447127 nanos:480130471}"
Jun 20 19:18:47.497967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71c3f66155fbdd48c787cc916f55aaad599cbbefc3b104231d4d993001538a68-rootfs.mount: Deactivated successfully.
Jun 20 19:18:47.938770 sshd[4970]: Accepted publickey for core from 10.200.16.10 port 39498 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8
Jun 20 19:18:47.940238 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:18:47.945036 systemd-logind[1696]: New session 26 of user core.
Jun 20 19:18:47.949654 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:18:48.395875 containerd[1721]: time="2025-06-20T19:18:48.395833903Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:18:48.415756 containerd[1721]: time="2025-06-20T19:18:48.415154114Z" level=info msg="Container a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:48.420156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742223784.mount: Deactivated successfully.
Jun 20 19:18:48.435300 containerd[1721]: time="2025-06-20T19:18:48.435262064Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\""
Jun 20 19:18:48.435803 containerd[1721]: time="2025-06-20T19:18:48.435767667Z" level=info msg="StartContainer for \"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\""
Jun 20 19:18:48.437330 containerd[1721]: time="2025-06-20T19:18:48.437279779Z" level=info msg="connecting to shim a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" protocol=ttrpc version=3
Jun 20 19:18:48.457674 systemd[1]: Started cri-containerd-a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac.scope - libcontainer container a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac.
Jun 20 19:18:48.486248 systemd[1]: cri-containerd-a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac.scope: Deactivated successfully.
Jun 20 19:18:48.488353 containerd[1721]: time="2025-06-20T19:18:48.488310582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\" id:\"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\" pid:5035 exited_at:{seconds:1750447128 nanos:487326990}"
Jun 20 19:18:48.489051 containerd[1721]: time="2025-06-20T19:18:48.488902980Z" level=info msg="received exit event container_id:\"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\" id:\"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\" pid:5035 exited_at:{seconds:1750447128 nanos:487326990}"
Jun 20 19:18:48.497107 containerd[1721]: time="2025-06-20T19:18:48.497051141Z" level=info msg="StartContainer for \"a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac\" returns successfully"
Jun 20 19:18:48.508635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3b8146af00e1150b778fb869e77908d85239e992beb1d3104e6ac0393c405ac-rootfs.mount: Deactivated successfully.
Jun 20 19:18:49.235766 kubelet[3184]: I0620 19:18:49.235631 3184 setters.go:618] "Node became not ready" node="ci-4344.1.0-a-324c5119a7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:18:49Z","lastTransitionTime":"2025-06-20T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:18:49.399811 containerd[1721]: time="2025-06-20T19:18:49.399703543Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:18:49.421880 containerd[1721]: time="2025-06-20T19:18:49.420606870Z" level=info msg="Container f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:49.436657 containerd[1721]: time="2025-06-20T19:18:49.436617041Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\""
Jun 20 19:18:49.437148 containerd[1721]: time="2025-06-20T19:18:49.437121906Z" level=info msg="StartContainer for \"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\""
Jun 20 19:18:49.439545 containerd[1721]: time="2025-06-20T19:18:49.438466067Z" level=info msg="connecting to shim f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" protocol=ttrpc version=3
Jun 20 19:18:49.462645 systemd[1]: Started cri-containerd-f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4.scope - libcontainer container f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4.
Jun 20 19:18:49.485263 systemd[1]: cri-containerd-f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4.scope: Deactivated successfully.
Jun 20 19:18:49.488215 containerd[1721]: time="2025-06-20T19:18:49.486237103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\" id:\"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\" pid:5075 exited_at:{seconds:1750447129 nanos:485782516}"
Jun 20 19:18:49.489854 containerd[1721]: time="2025-06-20T19:18:49.489721568Z" level=info msg="received exit event container_id:\"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\" id:\"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\" pid:5075 exited_at:{seconds:1750447129 nanos:485782516}"
Jun 20 19:18:49.495709 containerd[1721]: time="2025-06-20T19:18:49.495681080Z" level=info msg="StartContainer for \"f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4\" returns successfully"
Jun 20 19:18:49.506424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5dd89557e861ca832cc22ffdb91a4dedd598750892a3d9766a9ff976c7e2bb4-rootfs.mount: Deactivated successfully.
Jun 20 19:18:50.405516 containerd[1721]: time="2025-06-20T19:18:50.404791019Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:18:50.423684 containerd[1721]: time="2025-06-20T19:18:50.423548014Z" level=info msg="Container 3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:18:50.436038 containerd[1721]: time="2025-06-20T19:18:50.435996997Z" level=info msg="CreateContainer within sandbox \"18a1dd4eb35fa34acddf3e92869d9fb790847d3d2ec6a3c1c1b44d6a4b93e605\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\""
Jun 20 19:18:50.436526 containerd[1721]: time="2025-06-20T19:18:50.436476174Z" level=info msg="StartContainer for \"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\""
Jun 20 19:18:50.437519 containerd[1721]: time="2025-06-20T19:18:50.437470442Z" level=info msg="connecting to shim 3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c" address="unix:///run/containerd/s/cf66b04f7250ffe1fc217c76c9b10a8d908e79c5721e5bd6f13120f96313d136" protocol=ttrpc version=3
Jun 20 19:18:50.461653 systemd[1]: Started cri-containerd-3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c.scope - libcontainer container 3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c.
Jun 20 19:18:50.494670 containerd[1721]: time="2025-06-20T19:18:50.494633128Z" level=info msg="StartContainer for \"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" returns successfully"
Jun 20 19:18:50.564040 containerd[1721]: time="2025-06-20T19:18:50.563993205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" id:\"bcfe2898c82140852a053b924b67d6ea98d97ecb336e5a28fd9b9b6892398a5c\" pid:5142 exited_at:{seconds:1750447130 nanos:563684284}"
Jun 20 19:18:50.853542 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jun 20 19:18:52.444343 containerd[1721]: time="2025-06-20T19:18:52.444283155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" id:\"985bde505329b094dfa2d48227e9cf75071bd746a3fbbaf6582cb76f672ebffe\" pid:5293 exit_status:1 exited_at:{seconds:1750447132 nanos:443581393}"
Jun 20 19:18:53.428010 systemd-networkd[1355]: lxc_health: Link UP
Jun 20 19:18:53.443760 systemd-networkd[1355]: lxc_health: Gained carrier
Jun 20 19:18:54.370349 kubelet[3184]: I0620 19:18:54.369278 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-256dm" podStartSLOduration=8.369259829 podStartE2EDuration="8.369259829s" podCreationTimestamp="2025-06-20 19:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:51.420658173 +0000 UTC m=+155.502073709" watchObservedRunningTime="2025-06-20 19:18:54.369259829 +0000 UTC m=+158.450675363"
Jun 20 19:18:54.732837 containerd[1721]: time="2025-06-20T19:18:54.731921429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" id:\"cd5efde21c0bcc298e9e5766583de397e76f4b16474282060f44c8f7d8c3b835\" pid:5674 exited_at:{seconds:1750447134 nanos:731100791}"
Jun 20 19:18:54.737541 kubelet[3184]: E0620 19:18:54.737444 3184 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40092->127.0.0.1:46231: write tcp 127.0.0.1:40092->127.0.0.1:46231: write: broken pipe
Jun 20 19:18:55.426705 systemd-networkd[1355]: lxc_health: Gained IPv6LL
Jun 20 19:18:56.861718 containerd[1721]: time="2025-06-20T19:18:56.861672086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" id:\"cd34ebe20c8ff3f1a37208059a2b7c071ea396965bba09a5f87c969ebc9cbb71\" pid:5704 exited_at:{seconds:1750447136 nanos:861217991}"
Jun 20 19:18:58.947297 containerd[1721]: time="2025-06-20T19:18:58.947051689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bd6d52fbfe03a1fbe573d59069ab696fa9c9e89daf4765438383477543a7d6c\" id:\"b0132ea3677eb95e14b33e6162d6580b29878f1ff18bd13bdd7ab46435ae7343\" pid:5732 exited_at:{seconds:1750447138 nanos:946646419}"
Jun 20 19:18:59.045112 sshd[5016]: Connection closed by 10.200.16.10 port 39498
Jun 20 19:18:59.045824 sshd-session[4970]: pam_unix(sshd:session): session closed for user core
Jun 20 19:18:59.049599 systemd[1]: sshd@23-10.200.4.8:22-10.200.16.10:39498.service: Deactivated successfully.
Jun 20 19:18:59.051210 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:18:59.052138 systemd-logind[1696]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:18:59.053411 systemd-logind[1696]: Removed session 26.