Jul 10 00:23:28.970554 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025 Jul 10 00:23:28.970582 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:23:28.970593 kernel: BIOS-provided physical RAM map: Jul 10 00:23:28.970599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 00:23:28.970606 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 10 00:23:28.970612 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jul 10 00:23:28.970620 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jul 10 00:23:28.970629 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jul 10 00:23:28.970635 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jul 10 00:23:28.970642 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 10 00:23:28.970649 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 10 00:23:28.970655 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 10 00:23:28.970662 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 10 00:23:28.970669 kernel: printk: legacy bootconsole [earlyser0] enabled Jul 10 00:23:28.974037 kernel: NX (Execute Disable) protection: active Jul 10 00:23:28.974052 kernel: APIC: Static calls initialized Jul 10 00:23:28.974060 kernel: efi: EFI v2.7 by Microsoft Jul 10 00:23:28.974068 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018 Jul 10 00:23:28.974076 kernel: random: crng init done Jul 10 00:23:28.974083 kernel: secureboot: Secure boot disabled Jul 10 00:23:28.974091 kernel: SMBIOS 3.1.0 present. 
Jul 10 00:23:28.974099 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Jul 10 00:23:28.974106 kernel: DMI: Memory slots populated: 2/2 Jul 10 00:23:28.974115 kernel: Hypervisor detected: Microsoft Hyper-V Jul 10 00:23:28.974123 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jul 10 00:23:28.974130 kernel: Hyper-V: Nested features: 0x3e0101 Jul 10 00:23:28.974138 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 10 00:23:28.974145 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 10 00:23:28.974153 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 10 00:23:28.974160 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 10 00:23:28.974167 kernel: tsc: Detected 2300.000 MHz processor Jul 10 00:23:28.974175 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:23:28.974184 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:23:28.974194 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jul 10 00:23:28.974202 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 00:23:28.974210 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:23:28.974218 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jul 10 00:23:28.974226 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jul 10 00:23:28.974234 kernel: Using GB pages for direct mapping Jul 10 00:23:28.974242 kernel: ACPI: Early table checksum verification disabled Jul 10 00:23:28.974253 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jul 10 00:23:28.974263 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974271 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974279 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 10 00:23:28.974287 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 10 00:23:28.974295 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974303 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974313 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974321 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 10 00:23:28.974329 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jul 10 00:23:28.974337 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 10 00:23:28.974346 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 10 00:23:28.974354 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Jul 10 00:23:28.974361 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 10 00:23:28.974369 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 10 00:23:28.974377 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 10 00:23:28.974387 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 10 00:23:28.974395 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Jul 10 00:23:28.974403 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jul 10 00:23:28.974411 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 10 00:23:28.974419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 10 00:23:28.974427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jul 10 00:23:28.974435 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jul 10 00:23:28.974444 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jul 10 00:23:28.974451 kernel: Zone ranges: Jul 10 00:23:28.974461 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:23:28.974469 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 10 00:23:28.974477 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 10 00:23:28.974485 kernel: Device empty Jul 10 00:23:28.974493 kernel: Movable zone start for each node Jul 10 00:23:28.974501 kernel: Early memory node ranges Jul 10 00:23:28.974509 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 10 00:23:28.974517 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jul 10 00:23:28.974525 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jul 10 00:23:28.974535 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 10 00:23:28.974543 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 10 00:23:28.974551 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 10 00:23:28.974559 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:23:28.974567 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 10 00:23:28.974575 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 10 00:23:28.974583 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jul 10 00:23:28.974591 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 10 00:23:28.974599 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:23:28.974608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:23:28.974617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:23:28.974625 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 10 00:23:28.974633 kernel: TSC deadline timer available Jul 10 00:23:28.974641 kernel: CPU topo: Max. logical packages: 1 Jul 10 00:23:28.974649 kernel: CPU topo: Max. logical dies: 1 Jul 10 00:23:28.974656 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:23:28.974664 kernel: CPU topo: Max. threads per core: 2 Jul 10 00:23:28.974672 kernel: CPU topo: Num. cores per package: 1 Jul 10 00:23:28.974698 kernel: CPU topo: Num. 
threads per package: 2 Jul 10 00:23:28.974706 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 10 00:23:28.974714 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 10 00:23:28.974722 kernel: Booting paravirtualized kernel on Hyper-V Jul 10 00:23:28.974730 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:23:28.974738 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 10 00:23:28.974746 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 10 00:23:28.974754 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 10 00:23:28.974762 kernel: pcpu-alloc: [0] 0 1 Jul 10 00:23:28.974772 kernel: Hyper-V: PV spinlocks enabled Jul 10 00:23:28.974780 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 00:23:28.974789 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:23:28.974797 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:23:28.974804 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 10 00:23:28.974817 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:23:28.974830 kernel: Fallback order for Node 0: 0 Jul 10 00:23:28.974837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jul 10 00:23:28.974847 kernel: Policy zone: Normal Jul 10 00:23:28.974854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:23:28.974861 kernel: software IO TLB: area num 2. Jul 10 00:23:28.974869 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 10 00:23:28.974877 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:23:28.974885 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:23:28.974892 kernel: Dynamic Preempt: voluntary Jul 10 00:23:28.974899 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:23:28.974906 kernel: rcu: RCU event tracing is enabled. Jul 10 00:23:28.974922 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 10 00:23:28.974931 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:23:28.974939 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:23:28.974949 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:23:28.974956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:23:28.974965 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 10 00:23:28.974973 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 00:23:28.974980 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 00:23:28.974991 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 10 00:23:28.975003 kernel: Using NULL legacy PIC Jul 10 00:23:28.975012 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 10 00:23:28.975020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 00:23:28.975028 kernel: Console: colour dummy device 80x25 Jul 10 00:23:28.975036 kernel: printk: legacy console [tty1] enabled Jul 10 00:23:28.975044 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:23:28.975052 kernel: printk: legacy bootconsole [earlyser0] disabled Jul 10 00:23:28.975060 kernel: ACPI: Core revision 20240827 Jul 10 00:23:28.975069 kernel: Failed to register legacy timer interrupt Jul 10 00:23:28.975078 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:23:28.975086 kernel: x2apic enabled Jul 10 00:23:28.975095 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:23:28.975103 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 10 00:23:28.975111 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 10 00:23:28.975120 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jul 10 00:23:28.975129 kernel: Hyper-V: Using IPI hypercalls Jul 10 00:23:28.975138 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jul 10 00:23:28.975148 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jul 10 00:23:28.975157 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jul 10 00:23:28.975166 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jul 10 00:23:28.975175 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jul 10 00:23:28.975184 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jul 10 00:23:28.975193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jul 10 00:23:28.975202 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jul 10 00:23:28.975211 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 00:23:28.975221 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 10 00:23:28.975230 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 10 00:23:28.975238 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:23:28.975247 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:23:28.975255 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:23:28.975264 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 10 00:23:28.975273 kernel: RETBleed: Vulnerable Jul 10 00:23:28.975281 kernel: Speculative Store Bypass: Vulnerable Jul 10 00:23:28.975290 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 10 00:23:28.975298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:23:28.975307 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:23:28.975317 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:23:28.975325 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 10 00:23:28.975333 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 10 00:23:28.975342 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 10 00:23:28.975350 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jul 10 00:23:28.975359 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jul 10 00:23:28.975367 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jul 10 00:23:28.975376 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:23:28.975384 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 10 00:23:28.975393 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 10 00:23:28.975401 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 10 00:23:28.975411 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jul 10 00:23:28.975419 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jul 10 00:23:28.975427 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jul 10 00:23:28.975435 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jul 10 00:23:28.975444 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:23:28.975451 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:23:28.975460 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:23:28.975468 kernel: landlock: Up and running. Jul 10 00:23:28.975476 kernel: SELinux: Initializing. Jul 10 00:23:28.975484 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 00:23:28.975492 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 00:23:28.975501 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jul 10 00:23:28.975511 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jul 10 00:23:28.975519 kernel: signal: max sigframe size: 11952 Jul 10 00:23:28.975528 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:23:28.975537 kernel: rcu: Max phase no-delay instances is 400. Jul 10 00:23:28.975545 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 00:23:28.975554 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 10 00:23:28.975562 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:23:28.975570 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:23:28.975579 kernel: .... 
node #0, CPUs: #1 Jul 10 00:23:28.975589 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 00:23:28.975598 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jul 10 00:23:28.975607 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 299988K reserved, 0K cma-reserved) Jul 10 00:23:28.975616 kernel: devtmpfs: initialized Jul 10 00:23:28.975624 kernel: x86/mm: Memory block size: 128MB Jul 10 00:23:28.975633 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 10 00:23:28.975641 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:23:28.975650 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 10 00:23:28.975658 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:23:28.975668 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:23:28.975706 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:23:28.975715 kernel: audit: type=2000 audit(1752107006.030:1): state=initialized audit_enabled=0 res=1 Jul 10 00:23:28.975723 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:23:28.975732 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:23:28.975740 kernel: cpuidle: using governor menu Jul 10 00:23:28.975749 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:23:28.975758 kernel: dca service started, version 1.12.1 Jul 10 00:23:28.975766 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jul 10 00:23:28.975776 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jul 10 00:23:28.975785 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 10 00:23:28.975794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:23:28.975802 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 00:23:28.975810 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:23:28.975819 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:23:28.975828 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:23:28.975836 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:23:28.975847 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:23:28.975855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:23:28.975863 kernel: ACPI: Interpreter enabled Jul 10 00:23:28.975872 kernel: ACPI: PM: (supports S0 S5) Jul 10 00:23:28.975880 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:23:28.975889 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:23:28.975897 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 10 00:23:28.975906 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 10 00:23:28.975914 kernel: iommu: Default domain type: Translated Jul 10 00:23:28.975922 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:23:28.975932 kernel: efivars: Registered efivars operations Jul 10 00:23:28.975940 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:23:28.975949 kernel: PCI: System does not support PCI Jul 10 00:23:28.975957 kernel: vgaarb: loaded Jul 10 00:23:28.975965 kernel: clocksource: Switched to clocksource tsc-early Jul 10 00:23:28.975974 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:23:28.975982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:23:28.975991 kernel: pnp: PnP ACPI init Jul 10 00:23:28.975999 kernel: pnp: PnP ACPI: found 3 devices Jul 10 00:23:28.976009 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:23:28.976018 kernel: NET: Registered PF_INET protocol family Jul 10 00:23:28.976027 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 10 00:23:28.976035 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 10 00:23:28.976044 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:23:28.976052 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:23:28.976061 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 10 00:23:28.976070 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 10 00:23:28.976080 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 10 00:23:28.976089 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 10 00:23:28.976097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:23:28.976105 kernel: NET: Registered PF_XDP protocol family Jul 10 00:23:28.976112 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:23:28.976120 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 10 00:23:28.976127 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB) Jul 10 00:23:28.976135 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jul 10 00:23:28.976143 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jul 10 00:23:28.976153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jul 10 00:23:28.976161 kernel: clocksource: Switched to clocksource tsc Jul 10 00:23:28.976169 kernel: Initialise system trusted keyrings Jul 10 00:23:28.976177 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 10 00:23:28.976188 kernel: Key type asymmetric registered Jul 10 00:23:28.976201 kernel: Asymmetric key parser 'x509' registered Jul 10 00:23:28.976209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 00:23:28.976217 kernel: io scheduler mq-deadline registered Jul 10 00:23:28.976226 kernel: io scheduler kyber registered Jul 10 00:23:28.976237 kernel: io scheduler bfq registered Jul 10 00:23:28.976245 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:23:28.976256 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:23:28.976264 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:23:28.976273 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 10 00:23:28.976281 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:23:28.976289 kernel: i8042: PNP: No PS/2 controller found. Jul 10 00:23:28.976425 kernel: rtc_cmos 00:02: registered as rtc0 Jul 10 00:23:28.976503 kernel: rtc_cmos 00:02: setting system clock to 2025-07-10T00:23:28 UTC (1752107008) Jul 10 00:23:28.976571 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 10 00:23:28.976581 kernel: intel_pstate: Intel P-state driver initializing Jul 10 00:23:28.976590 kernel: efifb: probing for efifb Jul 10 00:23:28.976598 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 10 00:23:28.976607 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 10 00:23:28.976615 kernel: efifb: scrolling: redraw Jul 10 00:23:28.976624 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 00:23:28.976632 kernel: Console: switching to colour frame buffer device 128x48 Jul 10 00:23:28.976643 kernel: fb0: EFI VGA frame buffer device Jul 10 00:23:28.976651 kernel: pstore: Using crash dump compression: deflate Jul 10 00:23:28.976660 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 00:23:28.976668 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:23:28.976697 kernel: Segment Routing with IPv6 Jul 10 00:23:28.976706 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:23:28.976715 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:23:28.976722 kernel: Key type dns_resolver registered Jul 10 00:23:28.976730 kernel: IPI shorthand broadcast: enabled Jul 10 00:23:28.976740 kernel: sched_clock: Marking stable (2987004325, 89711480)->(3391690357, -314974552) Jul 10 00:23:28.976748 kernel: registered taskstats version 1 Jul 10 00:23:28.976755 kernel: Loading compiled-in X.509 certificates Jul 10 00:23:28.976764 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 00:23:28.976771 kernel: Demotion targets for Node 0: null Jul 10 00:23:28.976779 kernel: Key type .fscrypt registered Jul 10 00:23:28.976786 kernel: Key type fscrypt-provisioning registered Jul 10 00:23:28.976794 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 10 00:23:28.976803 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:23:28.976813 kernel: ima: No architecture policies found Jul 10 00:23:28.976822 kernel: clk: Disabling unused clocks Jul 10 00:23:28.976830 kernel: Warning: unable to open an initial console. Jul 10 00:23:28.976838 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:23:28.976847 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:23:28.976855 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:23:28.976864 kernel: Run /init as init process Jul 10 00:23:28.976872 kernel: with arguments: Jul 10 00:23:28.976880 kernel: /init Jul 10 00:23:28.976890 kernel: with environment: Jul 10 00:23:28.976898 kernel: HOME=/ Jul 10 00:23:28.976906 kernel: TERM=linux Jul 10 00:23:28.976914 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:23:28.976924 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:23:28.976937 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:23:28.976947 systemd[1]: Detected virtualization microsoft. Jul 10 00:23:28.979456 systemd[1]: Detected architecture x86-64. Jul 10 00:23:28.979477 systemd[1]: Running in initrd. Jul 10 00:23:28.979488 systemd[1]: No hostname configured, using default hostname. Jul 10 00:23:28.979499 systemd[1]: Hostname set to . Jul 10 00:23:28.979509 systemd[1]: Initializing machine ID from random generator. Jul 10 00:23:28.979519 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:23:28.979529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:23:28.979539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:23:28.979554 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:23:28.979565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:23:28.979575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:23:28.979586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:23:28.979597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:23:28.979608 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:23:28.979619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:23:28.979631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:23:28.979641 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:23:28.979651 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:23:28.979661 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:23:28.979671 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:23:28.979692 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:23:28.979702 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 10 00:23:28.979712 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:23:28.979722 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:23:28.979733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:23:28.979743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:23:28.979754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:23:28.979764 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:23:28.979774 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:23:28.979784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:23:28.979794 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:23:28.979805 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:23:28.979817 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:23:28.979827 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:23:28.979837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:23:28.979857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:28.979869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:23:28.979906 systemd-journald[205]: Collecting audit messages is disabled. Jul 10 00:23:28.979934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:23:28.979946 systemd-journald[205]: Journal started Jul 10 00:23:28.979971 systemd-journald[205]: Runtime Journal (/run/log/journal/7b6566f8098b41f28186e7f0156a1f3a) is 8M, max 158.9M, 150.9M free. Jul 10 00:23:28.975014 systemd-modules-load[206]: Inserted module 'overlay' Jul 10 00:23:28.984759 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:23:28.988756 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:23:28.993969 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:23:28.999914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:23:29.008409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:29.018775 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:23:29.025520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:23:29.030793 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:23:29.030816 kernel: Bridge firewalling registered Jul 10 00:23:29.032061 systemd-modules-load[206]: Inserted module 'br_netfilter' Jul 10 00:23:29.033164 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:23:29.035700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:23:29.036187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:23:29.037601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 10 00:23:29.039237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:23:29.070231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:23:29.075916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:23:29.078772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:23:29.088754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:23:29.095086 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:23:29.116272 systemd-resolved[237]: Positive Trust Anchors: Jul 10 00:23:29.118112 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:23:29.129392 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:23:29.129427 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:23:29.151186 systemd-resolved[237]: Defaulting to hostname 'linux'. Jul 10 00:23:29.154147 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:23:29.161812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:23:29.185695 kernel: SCSI subsystem initialized Jul 10 00:23:29.192689 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:23:29.201701 kernel: iscsi: registered transport (tcp) Jul 10 00:23:29.220068 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:23:29.220109 kernel: QLogic iSCSI HBA Driver Jul 10 00:23:29.233006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:23:29.250899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:23:29.258999 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:23:29.287829 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:23:29.290791 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 10 00:23:29.330691 kernel: raid6: avx512x4 gen() 46127 MB/s Jul 10 00:23:29.347686 kernel: raid6: avx512x2 gen() 46063 MB/s Jul 10 00:23:29.365684 kernel: raid6: avx512x1 gen() 28444 MB/s Jul 10 00:23:29.383687 kernel: raid6: avx2x4 gen() 37227 MB/s Jul 10 00:23:29.400684 kernel: raid6: avx2x2 gen() 43943 MB/s Jul 10 00:23:29.418257 kernel: raid6: avx2x1 gen() 31617 MB/s Jul 10 00:23:29.418276 kernel: raid6: using algorithm avx512x4 gen() 46127 MB/s Jul 10 00:23:29.436922 kernel: raid6: .... xor() 7628 MB/s, rmw enabled Jul 10 00:23:29.437011 kernel: raid6: using avx512x2 recovery algorithm Jul 10 00:23:29.454695 kernel: xor: automatically using best checksumming function avx Jul 10 00:23:29.566695 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:23:29.571250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:23:29.575112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:23:29.600046 systemd-udevd[453]: Using default interface naming scheme 'v255'. Jul 10 00:23:29.604726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:23:29.612266 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:23:29.631201 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jul 10 00:23:29.648397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:23:29.651795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:23:29.681565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:23:29.689775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:23:29.738726 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:23:29.749224 kernel: hv_vmbus: Vmbus version:5.3 Jul 10 00:23:29.749922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:29.750028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:29.760874 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:23:29.760895 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:23:29.759750 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:29.765301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:29.788743 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 10 00:23:29.791652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:29.791929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:29.799009 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:23:29.801616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 10 00:23:29.809694 kernel: PTP clock support registered Jul 10 00:23:29.809724 kernel: hv_vmbus: registering driver hid_hyperv Jul 10 00:23:29.814541 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 10 00:23:29.814580 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 10 00:23:29.819985 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 10 00:23:29.823740 kernel: hv_vmbus: registering driver hv_storvsc Jul 10 00:23:29.823783 kernel: AES CTR mode by8 optimization enabled Jul 10 00:23:29.825632 kernel: scsi host0: storvsc_host_t Jul 10 00:23:29.827799 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 10 00:23:29.827846 kernel: hv_vmbus: registering driver hv_netvsc Jul 10 00:23:29.835964 kernel: hv_utils: Registering HyperV Utility Driver Jul 10 00:23:29.835997 kernel: hv_vmbus: registering driver hv_utils Jul 10 00:23:29.837406 kernel: hv_vmbus: registering driver hv_pci Jul 10 00:23:29.842697 kernel: hv_utils: Shutdown IC version 3.2 Jul 10 00:23:29.842734 kernel: hv_utils: Heartbeat IC version 3.0 Jul 10 00:23:29.842753 kernel: hv_utils: TimeSync IC version 4.0 Jul 10 00:23:30.347020 systemd-resolved[237]: Clock change detected. Flushing caches. Jul 10 00:23:30.385451 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:30.385561 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 10 00:23:30.385631 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 10 00:23:30.386756 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:30.386844 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d49692e (unnamed net_device) (uninitialized): VF slot 1 added Jul 10 00:23:30.386922 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 10 00:23:30.391711 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 10 00:23:30.398048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 00:23:30.415718 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 10 00:23:30.415755 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 10 00:23:30.415871 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:23:30.416746 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:30.417763 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 10 00:23:30.418745 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 10 00:23:30.434345 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 10 00:23:30.434537 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:30.434715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#309 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:30.454716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:30.704756 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 00:23:30.709719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:30.970721 kernel: nvme nvme0: using unchecked data buffer Jul 10 00:23:31.148463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 10 00:23:31.161366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 10 00:23:31.174016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:23:31.177683 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:23:31.199886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:31.200929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:31.201594 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:23:31.201850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:23:31.201871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:23:31.202808 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:23:31.214848 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:23:31.237633 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 10 00:23:31.251929 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:31.257721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:31.414848 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:31.415039 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 10 00:23:31.417581 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 10 00:23:31.419176 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:31.423853 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 10 00:23:31.427873 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 10 00:23:31.433410 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 10 00:23:31.433461 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 10 00:23:31.449741 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:31.449912 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 10 00:23:31.454834 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 10 00:23:31.458964 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:31.468729 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 10 00:23:31.472186 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d49692e eth0: VF registering: eth1 Jul 10 00:23:31.472352 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 10 00:23:31.475720 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 10 00:23:32.265470 disk-uuid[682]: The operation has completed successfully. Jul 10 00:23:32.268938 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:32.318014 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:23:32.318093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:23:32.348469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:23:32.360772 sh[716]: Success Jul 10 00:23:32.386788 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:23:32.386842 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:23:32.388121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:23:32.395713 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:23:32.595980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:23:32.601684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:23:32.611657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:23:32.653560 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:23:32.653597 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (729) Jul 10 00:23:32.657335 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:23:32.657376 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:32.658803 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:23:32.894521 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 10 00:23:32.895967 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:23:32.900356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:23:32.901088 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:23:32.909830 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:23:32.935729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (762) Jul 10 00:23:32.939999 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:32.940040 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:32.940053 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:32.959733 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:32.960181 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:23:32.972818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:23:32.989923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:23:32.993909 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:23:33.023276 systemd-networkd[898]: lo: Link UP Jul 10 00:23:33.023283 systemd-networkd[898]: lo: Gained carrier Jul 10 00:23:33.032017 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:23:33.032241 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:33.032366 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d49692e eth0: Data path switched to VF: enP30832s1 Jul 10 00:23:33.024684 systemd-networkd[898]: Enumeration completed Jul 10 00:23:33.024781 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:23:33.025101 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:33.025104 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:33.032988 systemd-networkd[898]: enP30832s1: Link UP Jul 10 00:23:33.033059 systemd-networkd[898]: eth0: Link UP Jul 10 00:23:33.033200 systemd-networkd[898]: eth0: Gained carrier Jul 10 00:23:33.033209 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:33.036813 systemd[1]: Reached target network.target - Network. Jul 10 00:23:33.039361 systemd-networkd[898]: enP30832s1: Gained carrier Jul 10 00:23:33.046737 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:33.620360 ignition[875]: Ignition 2.21.0 Jul 10 00:23:33.621880 ignition[875]: Stage: fetch-offline Jul 10 00:23:33.624004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:23:33.622061 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:33.628696 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 10 00:23:33.622070 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:33.622208 ignition[875]: parsed url from cmdline: "" Jul 10 00:23:33.622211 ignition[875]: no config URL provided Jul 10 00:23:33.622217 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:33.622224 ignition[875]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:33.622229 ignition[875]: failed to fetch config: resource requires networking Jul 10 00:23:33.622450 ignition[875]: Ignition finished successfully Jul 10 00:23:33.651884 ignition[909]: Ignition 2.21.0 Jul 10 00:23:33.651896 ignition[909]: Stage: fetch Jul 10 00:23:33.652076 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:33.652084 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:33.652151 ignition[909]: parsed url from cmdline: "" Jul 10 00:23:33.652154 ignition[909]: no config URL provided Jul 10 00:23:33.652158 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:33.652163 ignition[909]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:33.652197 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 10 00:23:33.704425 ignition[909]: GET result: OK Jul 10 00:23:33.704483 ignition[909]: config has been read from IMDS userdata Jul 10 00:23:33.704507 ignition[909]: parsing config with SHA512: 63888db2172520781b47e5ba8af1e072e8e96ff247ac2620154e566e3140f51af16057f4f324b5e5ed071d07cf520b67d67dac859723ed95f7e44594d716c752 Jul 10 00:23:33.709960 unknown[909]: fetched base config from "system" Jul 10 00:23:33.709967 unknown[909]: fetched base config from "system" Jul 10 00:23:33.710252 ignition[909]: fetch: fetch complete Jul 10 00:23:33.709971 unknown[909]: fetched user config from "azure" Jul 10 00:23:33.710256 ignition[909]: fetch: fetch passed Jul 10 00:23:33.712425 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 00:23:33.710288 ignition[909]: Ignition finished successfully Jul 10 00:23:33.719335 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:23:33.745293 ignition[915]: Ignition 2.21.0 Jul 10 00:23:33.745299 ignition[915]: Stage: kargs Jul 10 00:23:33.745425 ignition[915]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:33.747985 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:23:33.745446 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:33.752028 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:23:33.745962 ignition[915]: kargs: kargs passed Jul 10 00:23:33.746000 ignition[915]: Ignition finished successfully Jul 10 00:23:33.770529 ignition[921]: Ignition 2.21.0 Jul 10 00:23:33.770538 ignition[921]: Stage: disks Jul 10 00:23:33.772051 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:33.772061 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:33.775676 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:23:33.774909 ignition[921]: disks: disks passed Jul 10 00:23:33.780158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:23:33.774938 ignition[921]: Ignition finished successfully Jul 10 00:23:33.784540 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jul 10 00:23:33.788106 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:23:33.791184 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:23:33.797953 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:23:33.802439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:23:33.853620 systemd-fsck[929]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 10 00:23:33.857171 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:23:33.862684 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:23:34.083721 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:23:34.084507 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:23:34.085794 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:23:34.099725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:34.104913 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:23:34.108142 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 10 00:23:34.115789 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:23:34.125881 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (938) Jul 10 00:23:34.125905 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:34.125918 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:34.125928 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:34.115822 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:23:34.119669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:23:34.132407 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:23:34.136960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:23:34.215801 systemd-networkd[898]: enP30832s1: Gained IPv6LL Jul 10 00:23:34.279990 systemd-networkd[898]: eth0: Gained IPv6LL Jul 10 00:23:34.627418 coreos-metadata[940]: Jul 10 00:23:34.627 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:23:34.640075 coreos-metadata[940]: Jul 10 00:23:34.640 INFO Fetch successful Jul 10 00:23:34.641366 coreos-metadata[940]: Jul 10 00:23:34.641 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:23:34.665110 coreos-metadata[940]: Jul 10 00:23:34.665 INFO Fetch successful Jul 10 00:23:34.677299 coreos-metadata[940]: Jul 10 00:23:34.677 INFO wrote hostname ci-4344.1.1-n-69725f0cc9 to /sysroot/etc/hostname Jul 10 00:23:34.680597 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 10 00:23:34.736580 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:23:34.764622 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:23:34.779188 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:23:34.783869 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:23:35.515960 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:23:35.519794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:23:35.525819 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:23:35.533719 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:23:35.536517 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:35.564395 ignition[1056]: INFO : Ignition 2.21.0 Jul 10 00:23:35.564395 ignition[1056]: INFO : Stage: mount Jul 10 00:23:35.569795 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:35.569795 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:35.569795 ignition[1056]: INFO : mount: mount passed Jul 10 00:23:35.569795 ignition[1056]: INFO : Ignition finished successfully Jul 10 00:23:35.567918 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:23:35.572678 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:23:35.586889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:35.606313 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:23:35.610713 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1068) Jul 10 00:23:35.614003 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:35.614038 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:35.615248 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:35.621413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:23:35.642252 ignition[1086]: INFO : Ignition 2.21.0 Jul 10 00:23:35.642252 ignition[1086]: INFO : Stage: files Jul 10 00:23:35.647741 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:35.647741 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:35.647741 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:23:35.655057 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:23:35.655057 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:23:35.681168 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:23:35.685791 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:23:35.685791 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:23:35.681436 unknown[1086]: wrote ssh authorized keys file for user: core Jul 10 00:23:35.719414 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 10 00:23:35.722035 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 10 00:23:35.778076 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:23:35.993004 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 10 00:23:35.993004 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:23:35.993004 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 10 00:23:36.536860 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:23:36.721664 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:23:36.725810 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 
00:23:37.651581 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:23:37.656818 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:23:37.656818 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:23:37.665781 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:23:37.665781 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:23:37.665781 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 10 00:23:38.435653 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:23:39.017290 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:23:39.017290 ignition[1086]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 00:23:39.032871 ignition[1086]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:23:39.039449 ignition[1086]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:23:39.045838 ignition[1086]: INFO : files: files passed Jul 10 00:23:39.045838 ignition[1086]: INFO : Ignition finished successfully Jul 10 00:23:39.044835 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 00:23:39.051443 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 00:23:39.070019 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 00:23:39.072841 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:23:39.072933 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 10 00:23:39.087441 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:23:39.087441 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:23:39.091836 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:23:39.094334 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:23:39.100849 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 00:23:39.101805 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 00:23:39.145167 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:23:39.145256 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 00:23:39.150016 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 00:23:39.151617 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 00:23:39.156216 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 00:23:39.158586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 00:23:39.176247 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:23:39.179804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:23:39.196874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:23:39.197339 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:23:39.197608 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:23:39.203848 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:23:39.203963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:23:39.207436 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:23:39.207760 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:23:39.208084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:23:39.208613 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:23:39.209182 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:23:39.209715 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:23:39.209980 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:23:39.210244 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:23:39.210514 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:23:39.210783 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:23:39.211329 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:23:39.211576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:23:39.211679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:23:39.212238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:23:39.212796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:23:39.213415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 10 00:23:39.214616 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:23:39.238491 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:23:39.238611 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:23:39.246101 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:23:39.246221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:23:39.256151 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:23:39.256242 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:23:39.261495 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 10 00:23:39.261595 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 10 00:23:39.265846 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:23:39.271793 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:23:39.315071 ignition[1139]: INFO : Ignition 2.21.0 Jul 10 00:23:39.315071 ignition[1139]: INFO : Stage: umount Jul 10 00:23:39.315071 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:39.315071 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:39.315071 ignition[1139]: INFO : umount: umount passed Jul 10 00:23:39.315071 ignition[1139]: INFO : Ignition finished successfully Jul 10 00:23:39.276211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:23:39.277194 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:23:39.310428 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:23:39.310547 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:23:39.314100 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:23:39.315921 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:23:39.316006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:23:39.318834 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:23:39.318914 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:23:39.323788 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:23:39.323878 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:23:39.345069 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:23:39.345154 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:23:39.346696 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:23:39.346749 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:23:39.349645 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 10 00:23:39.349678 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 10 00:23:39.352904 systemd[1]: Stopped target network.target - Network. Jul 10 00:23:39.355743 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:23:39.355785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:23:39.357945 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:23:39.358271 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 10 00:23:39.358948 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:23:39.363737 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:23:39.363948 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:23:39.363998 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:23:39.364033 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:23:39.364271 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:23:39.364295 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:23:39.364332 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:23:39.434223 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d49692e eth0: Data path switched from VF: enP30832s1 Jul 10 00:23:39.435444 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:39.364372 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:23:39.364579 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:23:39.364611 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:23:39.365144 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:23:39.365174 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:23:39.365342 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:23:39.365784 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:23:39.378405 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:23:39.378495 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:23:39.384594 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 00:23:39.384858 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:23:39.384947 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:23:39.389689 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 00:23:39.390249 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 00:23:39.391385 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:23:39.391414 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:23:39.392298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:23:39.392367 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:23:39.392401 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:23:39.392455 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:23:39.392479 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:23:39.396123 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:23:39.396165 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:23:39.396662 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:23:39.396693 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:23:39.397814 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:23:39.415240 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 10 00:23:39.415286 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:23:39.419102 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:23:39.421317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:23:39.427666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:23:39.427694 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:23:39.432780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:23:39.432814 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:23:39.436772 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:23:39.436815 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:23:39.441010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:23:39.441049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:23:39.441344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:23:39.441377 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:23:39.448385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:23:39.453712 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 00:23:39.454437 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:23:39.462030 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:23:39.462078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:23:39.467318 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 00:23:39.467362 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:23:39.470719 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:23:39.470757 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:23:39.485736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:39.488592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:39.495139 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 10 00:23:39.495188 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 10 00:23:39.495219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 00:23:39.495252 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:23:39.495515 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:23:39.495587 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:23:39.497877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:23:39.497938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:23:39.504396 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:23:39.508344 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:23:39.542308 systemd[1]: Switching root. 
Jul 10 00:23:39.611149 systemd-journald[205]: Journal stopped Jul 10 00:23:49.043256 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Jul 10 00:23:49.043297 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:23:49.043311 kernel: SELinux: policy capability open_perms=1 Jul 10 00:23:49.043321 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:23:49.043330 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:23:49.043339 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:23:49.043353 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:23:49.043362 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:23:49.043372 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:23:49.043381 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 00:23:49.043391 kernel: audit: type=1403 audit(1752107026.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:23:49.043402 systemd[1]: Successfully loaded SELinux policy in 89.790ms. Jul 10 00:23:49.043413 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.026ms. Jul 10 00:23:49.043427 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:23:49.043438 systemd[1]: Detected virtualization microsoft. Jul 10 00:23:49.043449 systemd[1]: Detected architecture x86-64. Jul 10 00:23:49.043459 systemd[1]: Detected first boot. Jul 10 00:23:49.043470 systemd[1]: Hostname set to . Jul 10 00:23:49.043482 systemd[1]: Initializing machine ID from random generator. Jul 10 00:23:49.043493 zram_generator::config[1182]: No configuration found. Jul 10 00:23:49.043505 kernel: Guest personality initialized and is inactive Jul 10 00:23:49.043514 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jul 10 00:23:49.043524 kernel: Initialized host personality Jul 10 00:23:49.043534 kernel: NET: Registered PF_VSOCK protocol family Jul 10 00:23:49.043544 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:23:49.043557 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 00:23:49.043568 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:23:49.043579 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:23:49.043589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:23:49.043599 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:23:49.043611 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:23:49.043621 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:23:49.043634 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:23:49.043644 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:23:49.043655 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:23:49.043665 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:23:49.043677 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 10 00:23:49.043688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:23:49.043713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:23:49.043724 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:23:49.043738 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:23:49.043751 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:23:49.043763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:23:49.043774 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 00:23:49.043784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:23:49.043796 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:23:49.043806 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:23:49.043817 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:23:49.043830 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:23:49.043840 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:23:49.043851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:23:49.043862 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:23:49.043872 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:23:49.043882 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:23:49.043892 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:23:49.043902 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:23:49.043915 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 00:23:49.043925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:23:49.043935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:23:49.043945 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:23:49.043954 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:23:49.043967 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:23:49.043977 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:23:49.043988 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:23:49.043999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:49.044010 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:23:49.044020 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:23:49.044031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:23:49.044043 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:23:49.044056 systemd[1]: Reached target machines.target - Containers. 
Jul 10 00:23:49.044067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:23:49.044078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:49.044089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:23:49.044100 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:23:49.044111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:23:49.044122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:23:49.044133 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:23:49.044143 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:23:49.044156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:23:49.044167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:23:49.044178 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:23:49.044189 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:23:49.044201 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:23:49.044212 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:23:49.044224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:49.044235 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:23:49.044248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:23:49.044258 kernel: loop: module loaded Jul 10 00:23:49.044268 kernel: fuse: init (API version 7.41) Jul 10 00:23:49.044279 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:23:49.044290 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:23:49.044301 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 00:23:49.044312 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:23:49.044323 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:23:49.044336 systemd[1]: Stopped verity-setup.service. Jul 10 00:23:49.044348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:49.044359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:23:49.044370 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:23:49.044380 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:23:49.044391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:23:49.044402 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:23:49.044413 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:23:49.044424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 10 00:23:49.044438 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:23:49.044448 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:23:49.044459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:23:49.044470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:23:49.044481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:23:49.044492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:23:49.044502 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:23:49.044514 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:23:49.044526 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:23:49.044537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:23:49.044548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:23:49.044559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:23:49.044570 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:23:49.044582 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:23:49.044593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:23:49.044610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:23:49.044622 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:23:49.044633 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:23:49.044647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 00:23:49.044658 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:23:49.044669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:49.044682 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:23:49.044694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:23:49.051771 systemd-journald[1265]: Collecting audit messages is disabled. Jul 10 00:23:49.051805 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:23:49.051818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:23:49.051830 systemd-journald[1265]: Journal started Jul 10 00:23:49.051857 systemd-journald[1265]: Runtime Journal (/run/log/journal/65fadee4d05b45c6b18161b6d2873b55) is 8M, max 158.9M, 150.9M free. Jul 10 00:23:48.492871 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:23:48.501197 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 10 00:23:48.501618 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:23:49.072725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:23:49.081338 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 10 00:23:49.084837 kernel: ACPI: bus type drm_connector registered Jul 10 00:23:49.090744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:23:49.099357 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:23:49.100422 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:23:49.100550 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:23:49.102981 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 00:23:49.104631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:23:49.106182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:23:49.108795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:23:49.123839 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:23:49.212806 systemd-journald[1265]: Time spent on flushing to /var/log/journal/65fadee4d05b45c6b18161b6d2873b55 is 20.796ms for 992 entries. Jul 10 00:23:49.212806 systemd-journald[1265]: System Journal (/var/log/journal/65fadee4d05b45c6b18161b6d2873b55) is 8M, max 2.6G, 2.6G free. Jul 10 00:23:50.149429 systemd-journald[1265]: Received client request to flush runtime journal. Jul 10 00:23:50.149511 kernel: loop0: detected capacity change from 0 to 113872 Jul 10 00:23:49.310736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:23:49.557778 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jul 10 00:23:49.557795 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jul 10 00:23:49.562675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:23:49.597657 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:23:49.602765 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:23:49.706180 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:23:49.709211 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:23:49.713030 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:23:49.717247 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:23:49.722481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:23:49.744896 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 10 00:23:49.744904 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 10 00:23:49.747097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:23:50.151017 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:23:51.507498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:23:51.508162 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 00:23:51.527722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:23:51.541869 kernel: loop1: detected capacity change from 0 to 229808 Jul 10 00:23:51.617716 kernel: loop2: detected capacity change from 0 to 28496 Jul 10 00:23:52.491166 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jul 10 00:23:52.493959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:23:52.525669 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Jul 10 00:23:52.566721 kernel: loop3: detected capacity change from 0 to 146240 Jul 10 00:23:53.009235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:23:53.017564 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:23:53.060338 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:23:53.132845 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:23:53.156724 kernel: hv_vmbus: registering driver hyperv_fb Jul 10 00:23:53.160754 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 10 00:23:53.160805 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 10 00:23:53.167391 kernel: Console: switching to colour dummy device 80x25 Jul 10 00:23:53.175179 kernel: Console: switching to colour frame buffer device 128x48 Jul 10 00:23:53.190765 kernel: hv_vmbus: registering driver hv_balloon Jul 10 00:23:53.200736 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 10 00:23:53.216731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:53.220732 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:23:53.263537 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:23:53.431874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:53.464310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:53.464516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:53.468758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:53.577722 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 10 00:23:53.610615 systemd-networkd[1362]: lo: Link UP Jul 10 00:23:53.610622 systemd-networkd[1362]: lo: Gained carrier Jul 10 00:23:53.612169 systemd-networkd[1362]: Enumeration completed Jul 10 00:23:53.612289 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:23:53.614293 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:53.614302 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:53.614650 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:23:53.616065 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 10 00:23:53.621730 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:23:53.624786 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:53.627690 systemd-networkd[1362]: enP30832s1: Link UP Jul 10 00:23:53.627775 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d49692e eth0: Data path switched to VF: enP30832s1 Jul 10 00:23:53.628066 systemd-networkd[1362]: eth0: Link UP Jul 10 00:23:53.628073 systemd-networkd[1362]: eth0: Gained carrier Jul 10 00:23:53.628088 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:53.636878 systemd-networkd[1362]: enP30832s1: Gained carrier Jul 10 00:23:53.646724 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:53.724060 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:23:53.773593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:23:53.776501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:23:53.793716 kernel: loop4: detected capacity change from 0 to 113872 Jul 10 00:23:53.805716 kernel: loop5: detected capacity change from 0 to 229808 Jul 10 00:23:53.817728 kernel: loop6: detected capacity change from 0 to 28496 Jul 10 00:23:53.829756 kernel: loop7: detected capacity change from 0 to 146240 Jul 10 00:23:53.841310 (sd-merge)[1443]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 10 00:23:53.841684 (sd-merge)[1443]: Merged extensions into '/usr'. Jul 10 00:23:53.851087 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:23:53.851099 systemd[1]: Reloading... Jul 10 00:23:53.911742 zram_generator::config[1481]: No configuration found. Jul 10 00:23:53.979326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:23:54.064357 systemd[1]: Reloading finished in 213 ms. Jul 10 00:23:54.081808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:23:54.083955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:23:54.096506 systemd[1]: Starting ensure-sysext.service... Jul 10 00:23:54.099829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:23:54.116885 systemd[1]: Reload requested from client PID 1534 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:23:54.116970 systemd[1]: Reloading... Jul 10 00:23:54.121736 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:23:54.121760 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:23:54.121925 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:23:54.122111 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 10 00:23:54.122664 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:23:54.122916 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Jul 10 00:23:54.122965 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Jul 10 00:23:54.126285 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:23:54.126295 systemd-tmpfiles[1535]: Skipping /boot Jul 10 00:23:54.132958 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:23:54.132968 systemd-tmpfiles[1535]: Skipping /boot Jul 10 00:23:54.182719 zram_generator::config[1567]: No configuration found. Jul 10 00:23:54.269493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:23:54.362764 systemd[1]: Reloading finished in 245 ms. Jul 10 00:23:54.375781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:54.388030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:23:54.396953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.399303 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:23:54.412432 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:23:54.416408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:54.417518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:23:54.421807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:23:54.425915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:23:54.428198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:54.428328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:54.430451 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:23:54.437062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:23:54.441218 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:23:54.444774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.447602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:23:54.448558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:23:54.452141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:23:54.452295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:23:54.455180 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:23:54.455391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 10 00:23:54.465064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.465271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:54.467321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:23:54.474790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:23:54.480404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:23:54.482344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:54.482452 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:54.482533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.485911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:23:54.493890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:23:54.494114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:23:54.498562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:23:54.498846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:23:54.503423 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:23:54.509758 systemd[1]: Finished ensure-sysext.service. Jul 10 00:23:54.512261 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:23:54.513568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:23:54.517975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.518327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:54.519200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:23:54.522881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:54.522916 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:54.522946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:23:54.522981 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:23:54.523009 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:23:54.525255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:54.529996 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 10 00:23:54.530279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:23:54.566862 systemd-resolved[1636]: Positive Trust Anchors: Jul 10 00:23:54.566870 systemd-resolved[1636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:23:54.566893 systemd-resolved[1636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:23:54.569732 systemd-resolved[1636]: Using system hostname 'ci-4344.1.1-n-69725f0cc9'. Jul 10 00:23:54.571167 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:23:54.572517 systemd[1]: Reached target network.target - Network. Jul 10 00:23:54.575789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:23:54.580188 augenrules[1673]: No rules Jul 10 00:23:54.580640 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:23:54.580822 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:23:54.695819 systemd-networkd[1362]: enP30832s1: Gained IPv6LL Jul 10 00:23:54.924407 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:23:54.926242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:23:55.335855 systemd-networkd[1362]: eth0: Gained IPv6LL Jul 10 00:23:55.337994 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:23:55.340317 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:23:56.607426 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:23:56.620522 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:23:56.623885 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:23:56.644617 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:23:56.647921 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:23:56.650836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:23:56.653759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:23:56.656743 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:23:56.659852 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:23:56.661102 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:23:56.664748 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:23:56.666404 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 10 00:23:56.666436 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:23:56.668740 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:23:56.683934 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:23:56.687718 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:23:56.692606 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:23:56.694552 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:23:56.696154 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:23:56.699996 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:23:56.703048 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:23:56.706213 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:23:56.709320 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:23:56.711751 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:23:56.713777 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:23:56.713802 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:23:56.715541 systemd[1]: Starting chronyd.service - NTP client/server... Jul 10 00:23:56.718548 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:23:56.723929 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:23:56.728795 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:23:56.732855 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:23:56.736588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:23:56.743363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:23:56.745815 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:23:56.751541 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:23:56.754790 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jul 10 00:23:56.755783 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 10 00:23:56.757444 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 10 00:23:56.759665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:23:56.766874 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:23:56.771465 jq[1691]: false Jul 10 00:23:56.772445 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:23:56.775779 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:23:56.779839 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:23:56.785212 KVP[1697]: KVP starting; pid is:1697 Jul 10 00:23:56.787068 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 10 00:23:56.795621 KVP[1697]: KVP LIC Version: 3.1 Jul 10 00:23:56.795711 kernel: hv_utils: KVP IC version 4.0 Jul 10 00:23:56.795937 extend-filesystems[1695]: Found /dev/nvme0n1p6 Jul 10 00:23:56.800189 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:23:56.803061 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:23:56.803459 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:23:56.806020 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:23:56.808327 extend-filesystems[1695]: Found /dev/nvme0n1p9 Jul 10 00:23:56.813783 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:23:56.817031 extend-filesystems[1695]: Checking size of /dev/nvme0n1p9 Jul 10 00:23:56.832831 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing passwd entry cache Jul 10 00:23:56.827730 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:23:56.826570 oslogin_cache_refresh[1696]: Refreshing passwd entry cache Jul 10 00:23:56.830838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:23:56.834377 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:23:56.838023 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:23:56.838217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:23:56.841957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:23:56.842142 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:23:56.842609 (chronyd)[1686]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 10 00:23:56.858009 jq[1713]: true Jul 10 00:23:56.864843 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting users, quitting Jul 10 00:23:56.864840 oslogin_cache_refresh[1696]: Failure getting users, quitting Jul 10 00:23:56.864926 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:23:56.864926 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing group entry cache Jul 10 00:23:56.864854 oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:23:56.864888 oslogin_cache_refresh[1696]: Refreshing group entry cache Jul 10 00:23:56.870020 chronyd[1741]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 10 00:23:56.876789 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting groups, quitting Jul 10 00:23:56.876789 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:23:56.876476 oslogin_cache_refresh[1696]: Failure getting groups, quitting Jul 10 00:23:56.876484 oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 10 00:23:56.877134 (ntainerd)[1742]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:23:56.879552 extend-filesystems[1695]: Old size kept for /dev/nvme0n1p9 Jul 10 00:23:56.882454 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:23:56.882681 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:23:56.884774 chronyd[1741]: Timezone right/UTC failed leap second check, ignoring Jul 10 00:23:56.884937 chronyd[1741]: Loaded seccomp filter (level 2) Jul 10 00:23:56.893474 systemd[1]: Started chronyd.service - NTP client/server. Jul 10 00:23:56.898142 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:23:56.898766 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:23:56.901880 update_engine[1710]: I20250710 00:23:56.901818 1710 main.cc:92] Flatcar Update Engine starting Jul 10 00:23:56.902569 jq[1737]: true Jul 10 00:23:56.902743 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:23:56.920793 tar[1724]: linux-amd64/LICENSE Jul 10 00:23:56.922432 tar[1724]: linux-amd64/helm Jul 10 00:23:56.969204 dbus-daemon[1689]: [system] SELinux support is enabled Jul 10 00:23:56.969308 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:23:56.973555 update_engine[1710]: I20250710 00:23:56.973521 1710 update_check_scheduler.cc:74] Next update check in 11m46s Jul 10 00:23:56.973767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:23:56.973800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:23:56.976564 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:23:56.976587 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:23:56.985052 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:23:57.010417 systemd-logind[1709]: New seat seat0. Jul 10 00:23:57.013208 systemd-logind[1709]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 10 00:23:57.015215 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:23:57.018886 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:23:57.021608 bash[1772]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:23:57.022236 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:23:57.027450 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 10 00:23:57.081883 coreos-metadata[1688]: Jul 10 00:23:57.080 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:23:57.085864 coreos-metadata[1688]: Jul 10 00:23:57.084 INFO Fetch successful Jul 10 00:23:57.085864 coreos-metadata[1688]: Jul 10 00:23:57.084 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 10 00:23:57.089680 coreos-metadata[1688]: Jul 10 00:23:57.089 INFO Fetch successful Jul 10 00:23:57.090420 coreos-metadata[1688]: Jul 10 00:23:57.090 INFO Fetching http://168.63.129.16/machine/63ebdf0b-c8d2-46ba-8936-4c6bddaeeb30/196be826%2Dfb42%2D42f9%2Daa93%2Dc36a46650fb0.%5Fci%2D4344.1.1%2Dn%2D69725f0cc9?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 10 00:23:57.093966 coreos-metadata[1688]: Jul 10 00:23:57.093 INFO Fetch successful Jul 10 00:23:57.094443 coreos-metadata[1688]: Jul 10 00:23:57.094 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:23:57.102670 coreos-metadata[1688]: Jul 10 00:23:57.102 INFO Fetch successful Jul 10 00:23:57.168121 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:23:57.190362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:23:57.295444 sshd_keygen[1746]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:23:57.301479 locksmithd[1775]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:23:57.349253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:23:57.353981 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:23:57.358434 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 10 00:23:57.387465 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:23:57.391863 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:23:57.397610 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:23:57.400730 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 00:23:57.422013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:23:57.425933 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:23:57.431529 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:23:57.434425 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:23:57.663532 containerd[1742]: time="2025-07-10T00:23:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:23:57.665886 containerd[1742]: time="2025-07-10T00:23:57.665827442Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681006226Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.342µs" Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681036351Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681058130Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681179210Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681192236Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681213066Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681259149Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681269205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681477760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681487611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681498091Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682172 containerd[1742]: time="2025-07-10T00:23:57.681506362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681561882Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681730308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681751094Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681761149Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681792096Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.681995926Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:23:57.682438 containerd[1742]: time="2025-07-10T00:23:57.682037466Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:23:57.698514 containerd[1742]: time="2025-07-10T00:23:57.698486565Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.699873174Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.699901034Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.699949977Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.699961944Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.699971597Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700006424Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700019238Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700030617Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700040830Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700050069Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700071337Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700176368Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700202042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:23:57.700895 containerd[1742]: time="2025-07-10T00:23:57.700227316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700237682Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700246964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700255813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700265667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700274382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700284868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700303047Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700313760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700378673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700391606Z" level=info msg="Start snapshots syncer" Jul 10 00:23:57.701568 containerd[1742]: time="2025-07-10T00:23:57.700502399Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:23:57.701802 containerd[1742]: time="2025-07-10T00:23:57.701344499Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:23:57.701802 containerd[1742]: time="2025-07-10T00:23:57.701400660Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:23:57.702165 containerd[1742]: time="2025-07-10T00:23:57.702139553Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702846286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702884048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702896546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702907628Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702921117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702930522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702950083Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702973467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: 
time="2025-07-10T00:23:57.702983564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:23:57.703024 containerd[1742]: time="2025-07-10T00:23:57.702993480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703292470Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703310948Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703320480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703366273Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703374231Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703382722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:23:57.703413 containerd[1742]: time="2025-07-10T00:23:57.703392118Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:23:57.704979 containerd[1742]: time="2025-07-10T00:23:57.703564273Z" level=info msg="runtime interface created" Jul 10 00:23:57.704979 containerd[1742]: time="2025-07-10T00:23:57.703570878Z" level=info msg="created NRI interface" Jul 10 00:23:57.704979 containerd[1742]: time="2025-07-10T00:23:57.703578175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:23:57.704979 containerd[1742]: time="2025-07-10T00:23:57.703589448Z" level=info msg="Connect containerd service" Jul 10 00:23:57.704979 containerd[1742]: time="2025-07-10T00:23:57.703615473Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:23:57.705940 containerd[1742]: time="2025-07-10T00:23:57.705918267Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:23:57.723379 tar[1724]: linux-amd64/README.md Jul 10 00:23:57.736799 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:23:58.175795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:23:58.185992 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:23:58.202761 containerd[1742]: time="2025-07-10T00:23:58.202599188Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:23:58.202761 containerd[1742]: time="2025-07-10T00:23:58.202661048Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 10 00:23:58.202761 containerd[1742]: time="2025-07-10T00:23:58.202687441Z" level=info msg="Start subscribing containerd event" Jul 10 00:23:58.202957 containerd[1742]: time="2025-07-10T00:23:58.202929870Z" level=info msg="Start recovering state" Jul 10 00:23:58.203065 containerd[1742]: time="2025-07-10T00:23:58.203057137Z" level=info msg="Start event monitor" Jul 10 00:23:58.203102 containerd[1742]: time="2025-07-10T00:23:58.203095624Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:23:58.203137 containerd[1742]: time="2025-07-10T00:23:58.203131157Z" level=info msg="Start streaming server" Jul 10 00:23:58.203179 containerd[1742]: time="2025-07-10T00:23:58.203172537Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:23:58.203337 containerd[1742]: time="2025-07-10T00:23:58.203220561Z" level=info msg="runtime interface starting up..." Jul 10 00:23:58.203337 containerd[1742]: time="2025-07-10T00:23:58.203229376Z" level=info msg="starting plugins..." Jul 10 00:23:58.203337 containerd[1742]: time="2025-07-10T00:23:58.203241394Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:23:58.204111 containerd[1742]: time="2025-07-10T00:23:58.203437252Z" level=info msg="containerd successfully booted in 0.540240s" Jul 10 00:23:58.203510 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:23:58.206021 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:23:58.208442 systemd[1]: Startup finished in 3.127s (kernel) + 17.324s (initrd) + 11.661s (userspace) = 32.113s. Jul 10 00:23:58.391781 login[1835]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:23:58.394389 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:23:58.402336 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:23:58.403229 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:23:58.413530 systemd-logind[1709]: New session 2 of user core. Jul 10 00:23:58.416926 systemd-logind[1709]: New session 1 of user core. Jul 10 00:23:58.434746 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:23:58.438513 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:23:58.451457 (systemd)[1872]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:23:58.454418 systemd-logind[1709]: New session c1 of user core. 
Jul 10 00:23:58.598009 waagent[1832]: 2025-07-10T00:23:58.597947Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.598625Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.599457Z INFO Daemon Daemon Python: 3.11.12 Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.599663Z INFO Daemon Daemon Run daemon Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.601219Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.601401Z INFO Daemon Daemon Using waagent for provisioning Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.601568Z INFO Daemon Daemon Activate resource disk Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.603047Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.604866Z INFO Daemon Daemon Found device: None Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.605394Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.606489Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.607298Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:23:58.626264 waagent[1832]: 2025-07-10T00:23:58.607571Z INFO Daemon Daemon Running default provisioning handler Jul 10 00:23:58.633993 waagent[1832]: 2025-07-10T00:23:58.633947Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 10 00:23:58.641966 waagent[1832]: 2025-07-10T00:23:58.635947Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 10 00:23:58.641966 waagent[1832]: 2025-07-10T00:23:58.636142Z INFO Daemon Daemon cloud-init is enabled: False Jul 10 00:23:58.641966 waagent[1832]: 2025-07-10T00:23:58.636462Z INFO Daemon Daemon Copying ovf-env.xml Jul 10 00:23:58.678252 systemd[1872]: Queued start job for default target default.target. Jul 10 00:23:58.687720 systemd[1872]: Created slice app.slice - User Application Slice. Jul 10 00:23:58.687750 systemd[1872]: Reached target paths.target - Paths. Jul 10 00:23:58.687830 systemd[1872]: Reached target timers.target - Timers. Jul 10 00:23:58.688788 systemd[1872]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:23:58.695945 waagent[1832]: 2025-07-10T00:23:58.693822Z INFO Daemon Daemon Successfully mounted dvd Jul 10 00:23:58.709561 systemd[1872]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:23:58.709780 systemd[1872]: Reached target sockets.target - Sockets. Jul 10 00:23:58.709861 systemd[1872]: Reached target basic.target - Basic System. Jul 10 00:23:58.709892 systemd[1872]: Reached target default.target - Main User Target. Jul 10 00:23:58.709915 systemd[1872]: Startup finished in 246ms. Jul 10 00:23:58.709939 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:23:58.715838 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 10 00:23:58.716555 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:23:58.721032 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 10 00:23:58.726183 waagent[1832]: 2025-07-10T00:23:58.725829Z INFO Daemon Daemon Detect protocol endpoint Jul 10 00:23:58.730969 waagent[1832]: 2025-07-10T00:23:58.728983Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:23:58.732024 waagent[1832]: 2025-07-10T00:23:58.731332Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 10 00:23:58.734811 waagent[1832]: 2025-07-10T00:23:58.734771Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 10 00:23:58.737162 waagent[1832]: 2025-07-10T00:23:58.737129Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 10 00:23:58.740763 waagent[1832]: 2025-07-10T00:23:58.737625Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 10 00:23:58.753235 waagent[1832]: 2025-07-10T00:23:58.751607Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 10 00:23:58.753235 waagent[1832]: 2025-07-10T00:23:58.752312Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 10 00:23:58.753235 waagent[1832]: 2025-07-10T00:23:58.752885Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 10 00:23:58.837293 waagent[1832]: 2025-07-10T00:23:58.836668Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 10 00:23:58.839251 waagent[1832]: 2025-07-10T00:23:58.839011Z INFO Daemon Daemon Forcing an update of the goal state. Jul 10 00:23:58.849797 waagent[1832]: 2025-07-10T00:23:58.849751Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.864542Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.865395Z INFO Daemon Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.865471Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bca254bc-d882-4e56-acc5-80059593a633 eTag: 17699072902378993942 source: Fabric] Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.866190Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.866570Z INFO Daemon Jul 10 00:23:58.867732 waagent[1832]: 2025-07-10T00:23:58.866802Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:23:58.875718 waagent[1832]: 2025-07-10T00:23:58.874421Z INFO Daemon Daemon Downloading artifacts profile blob Jul 10 00:23:58.951377 kubelet[1859]: E0710 00:23:58.951305 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:23:58.954082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:23:58.954211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:23:58.954742 systemd[1]: kubelet.service: Consumed 940ms CPU time, 267.9M memory peak. 
Jul 10 00:23:58.961146 waagent[1832]: 2025-07-10T00:23:58.961105Z INFO Daemon Downloaded certificate {'thumbprint': '2312190204276DC7BD318EC1A683F4DC029C6ED2', 'hasPrivateKey': True} Jul 10 00:23:58.964527 waagent[1832]: 2025-07-10T00:23:58.962007Z INFO Daemon Fetch goal state completed Jul 10 00:23:58.968446 waagent[1832]: 2025-07-10T00:23:58.968408Z INFO Daemon Daemon Starting provisioning Jul 10 00:23:58.970936 waagent[1832]: 2025-07-10T00:23:58.968921Z INFO Daemon Daemon Handle ovf-env.xml. Jul 10 00:23:58.970936 waagent[1832]: 2025-07-10T00:23:58.969216Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-69725f0cc9] Jul 10 00:23:58.991513 waagent[1832]: 2025-07-10T00:23:58.991473Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-69725f0cc9] Jul 10 00:23:58.997283 waagent[1832]: 2025-07-10T00:23:58.992101Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 10 00:23:58.997283 waagent[1832]: 2025-07-10T00:23:58.992343Z INFO Daemon Daemon Primary interface is [eth0] Jul 10 00:23:58.999366 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:58.999373 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:58.999395 systemd-networkd[1362]: eth0: DHCP lease lost Jul 10 00:23:59.000197 waagent[1832]: 2025-07-10T00:23:59.000154Z INFO Daemon Daemon Create user account if not exists Jul 10 00:23:59.001607 waagent[1832]: 2025-07-10T00:23:59.001522Z INFO Daemon Daemon User core already exists, skip useradd Jul 10 00:23:59.002817 waagent[1832]: 2025-07-10T00:23:59.001752Z INFO Daemon Daemon Configure sudoer Jul 10 00:23:59.007925 waagent[1832]: 2025-07-10T00:23:59.007886Z INFO Daemon Daemon Configure sshd Jul 10 00:23:59.016321 waagent[1832]: 2025-07-10T00:23:59.016280Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 10 00:23:59.018160 waagent[1832]: 2025-07-10T00:23:59.016746Z INFO Daemon Daemon Deploy ssh public key. Jul 10 00:23:59.020184 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.5/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:24:00.087422 waagent[1832]: 2025-07-10T00:24:00.087357Z INFO Daemon Daemon Provisioning complete Jul 10 00:24:00.099033 waagent[1832]: 2025-07-10T00:24:00.099003Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 10 00:24:00.099673 waagent[1832]: 2025-07-10T00:24:00.099436Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 10 00:24:00.103835 waagent[1832]: 2025-07-10T00:24:00.099738Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 10 00:24:00.198327 waagent[1923]: 2025-07-10T00:24:00.198256Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 10 00:24:00.198661 waagent[1923]: 2025-07-10T00:24:00.198361Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 10 00:24:00.198661 waagent[1923]: 2025-07-10T00:24:00.198399Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 10 00:24:00.198661 waagent[1923]: 2025-07-10T00:24:00.198433Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 10 00:24:00.229852 waagent[1923]: 2025-07-10T00:24:00.229799Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 10 00:24:00.229977 waagent[1923]: 2025-07-10T00:24:00.229952Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:24:00.230024 waagent[1923]: 2025-07-10T00:24:00.230006Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:24:00.237483 waagent[1923]: 2025-07-10T00:24:00.237435Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:24:00.241882 waagent[1923]: 2025-07-10T00:24:00.241851Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 10 00:24:00.242206 waagent[1923]: 2025-07-10T00:24:00.242181Z INFO ExtHandler Jul 10 00:24:00.242251 waagent[1923]: 2025-07-10T00:24:00.242234Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b2e80e7a-077d-4acc-aa3e-5d9caa716019 eTag: 17699072902378993942 source: Fabric] Jul 10 00:24:00.242438 waagent[1923]: 2025-07-10T00:24:00.242418Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:24:00.242798 waagent[1923]: 2025-07-10T00:24:00.242770Z INFO ExtHandler Jul 10 00:24:00.242837 waagent[1923]: 2025-07-10T00:24:00.242818Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:24:00.245248 waagent[1923]: 2025-07-10T00:24:00.245222Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 10 00:24:00.334312 waagent[1923]: 2025-07-10T00:24:00.334254Z INFO ExtHandler Downloaded certificate {'thumbprint': '2312190204276DC7BD318EC1A683F4DC029C6ED2', 'hasPrivateKey': True} Jul 10 00:24:00.334670 waagent[1923]: 2025-07-10T00:24:00.334641Z INFO ExtHandler Fetch goal state completed Jul 10 00:24:00.344662 waagent[1923]: 2025-07-10T00:24:00.344581Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 10 00:24:00.348917 waagent[1923]: 2025-07-10T00:24:00.348871Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1923 Jul 10 00:24:00.349031 waagent[1923]: 2025-07-10T00:24:00.349008Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 10 00:24:00.349268 waagent[1923]: 2025-07-10T00:24:00.349245Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 10 00:24:00.350277 waagent[1923]: 2025-07-10T00:24:00.350248Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 10 00:24:00.350556 waagent[1923]: 2025-07-10T00:24:00.350531Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 10 00:24:00.350670 waagent[1923]: 2025-07-10T00:24:00.350650Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 10 00:24:00.351089 waagent[1923]: 2025-07-10T00:24:00.351062Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 10 00:24:00.378058 waagent[1923]: 2025-07-10T00:24:00.378033Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 10 00:24:00.378201 waagent[1923]: 2025-07-10T00:24:00.378180Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 10 00:24:00.383675 waagent[1923]: 2025-07-10T00:24:00.383506Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 10 00:24:00.388673 systemd[1]: Reload requested from client PID 1938 ('systemctl') (unit waagent.service)... Jul 10 00:24:00.388684 systemd[1]: Reloading... Jul 10 00:24:00.468720 zram_generator::config[1975]: No configuration found. Jul 10 00:24:00.542056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:24:00.638618 systemd[1]: Reloading finished in 249 ms. 
Jul 10 00:24:00.655363 waagent[1923]: 2025-07-10T00:24:00.655289Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 10 00:24:00.655466 waagent[1923]: 2025-07-10T00:24:00.655442Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 10 00:24:00.779316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 10 00:24:00.963581 waagent[1923]: 2025-07-10T00:24:00.963465Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 10 00:24:00.963840 waagent[1923]: 2025-07-10T00:24:00.963811Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 10 00:24:00.964495 waagent[1923]: 2025-07-10T00:24:00.964460Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 10 00:24:00.965059 waagent[1923]: 2025-07-10T00:24:00.965019Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 10 00:24:00.965210 waagent[1923]: 2025-07-10T00:24:00.965072Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:24:00.965210 waagent[1923]: 2025-07-10T00:24:00.965114Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:24:00.965267 waagent[1923]: 2025-07-10T00:24:00.965229Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:24:00.965373 waagent[1923]: 2025-07-10T00:24:00.965351Z INFO EnvHandler ExtHandler Configure routes Jul 10 00:24:00.965523 waagent[1923]: 2025-07-10T00:24:00.965503Z INFO EnvHandler ExtHandler Gateway:None Jul 10 00:24:00.965566 waagent[1923]: 2025-07-10T00:24:00.965549Z INFO EnvHandler ExtHandler Routes:None Jul 10 00:24:00.966015 waagent[1923]: 2025-07-10T00:24:00.965986Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 10 00:24:00.966082 waagent[1923]: 2025-07-10T00:24:00.966055Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:24:00.966239 waagent[1923]: 2025-07-10T00:24:00.966218Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 10 00:24:00.966381 waagent[1923]: 2025-07-10T00:24:00.966358Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 10 00:24:00.966381 waagent[1923]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 10 00:24:00.966381 waagent[1923]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 10 00:24:00.966381 waagent[1923]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 10 00:24:00.966381 waagent[1923]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:24:00.966381 waagent[1923]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:24:00.966381 waagent[1923]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:24:00.966741 waagent[1923]: 2025-07-10T00:24:00.966685Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 10 00:24:00.967074 waagent[1923]: 2025-07-10T00:24:00.967024Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 10 00:24:00.967174 waagent[1923]: 2025-07-10T00:24:00.967145Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jul 10 00:24:00.967321 waagent[1923]: 2025-07-10T00:24:00.967290Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 10 00:24:00.971964 waagent[1923]: 2025-07-10T00:24:00.971885Z INFO ExtHandler ExtHandler Jul 10 00:24:00.972027 waagent[1923]: 2025-07-10T00:24:00.972003Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 935eb533-92ba-48da-872e-3bab3dba64c2 correlation 3502a2d3-5723-4abb-9063-452709ad7c90 created: 2025-07-10T00:23:02.248909Z] Jul 10 00:24:00.972273 waagent[1923]: 2025-07-10T00:24:00.972251Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 10 00:24:00.972655 waagent[1923]: 2025-07-10T00:24:00.972633Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 10 00:24:00.996389 waagent[1923]: 2025-07-10T00:24:00.996344Z INFO MonitorHandler ExtHandler Network interfaces: Jul 10 00:24:00.996389 waagent[1923]: Executing ['ip', '-a', '-o', 'link']: Jul 10 00:24:00.996389 waagent[1923]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 10 00:24:00.996389 waagent[1923]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:49:69:2e brd ff:ff:ff:ff:ff:ff\ alias Network Device Jul 10 00:24:00.996389 waagent[1923]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:49:69:2e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jul 10 00:24:00.996389 waagent[1923]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 10 00:24:00.996389 waagent[1923]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 10 00:24:00.996389 waagent[1923]: 2: eth0 inet 10.200.8.5/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 10 00:24:00.996389 waagent[1923]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 10 00:24:00.996389 waagent[1923]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 10 00:24:00.996389 waagent[1923]: 2: eth0 inet6 fe80::7eed:8dff:fe49:692e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 00:24:00.996389 waagent[1923]: 3: enP30832s1 inet6 fe80::7eed:8dff:fe49:692e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 00:24:01.020180 waagent[1923]: 2025-07-10T00:24:01.019627Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 10 00:24:01.020180 waagent[1923]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 10 00:24:01.020180 waagent[1923]: 2025-07-10T00:24:01.020043Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 606B67C6-A451-4F01-BF58-8AD15472E5AD;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 10 00:24:01.035430 waagent[1923]: 2025-07-10T00:24:01.035384Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 10 00:24:01.035430 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:24:01.035430 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.035430 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:24:01.035430 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.035430 waagent[1923]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jul 10 00:24:01.035430 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.035430 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 00:24:01.035430 waagent[1923]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 00:24:01.035430 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 10 00:24:01.038020 waagent[1923]: 2025-07-10T00:24:01.037977Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 10 00:24:01.038020 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:24:01.038020 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.038020 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:24:01.038020 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.038020 waagent[1923]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jul 10 00:24:01.038020 waagent[1923]: pkts bytes target prot opt in out source destination Jul 10 00:24:01.038020 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 00:24:01.038020 waagent[1923]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 00:24:01.038020 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 10 00:24:09.139290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:24:09.141038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:15.731796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:15.741910 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:24:15.774398 kubelet[2074]: E0710 00:24:15.774368 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:24:15.777323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:24:15.777454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:24:15.777793 systemd[1]: kubelet.service: Consumed 132ms CPU time, 108.5M memory peak. Jul 10 00:24:19.038665 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 10 00:24:19.039755 systemd[1]: Started sshd@0-10.200.8.5:22-10.200.16.10:33144.service - OpenSSH per-connection server daemon (10.200.16.10:33144). Jul 10 00:24:19.743692 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 33144 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:19.745029 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:19.750357 systemd-logind[1709]: New session 3 of user core. Jul 10 00:24:19.756828 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:24:20.298163 systemd[1]: Started sshd@1-10.200.8.5:22-10.200.16.10:41434.service - OpenSSH per-connection server daemon (10.200.16.10:41434). Jul 10 00:24:20.668255 chronyd[1741]: Selected source PHC0 Jul 10 00:24:20.928320 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 41434 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:20.929617 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:20.934286 systemd-logind[1709]: New session 4 of user core. Jul 10 00:24:20.939846 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:24:21.369581 sshd[2089]: Connection closed by 10.200.16.10 port 41434 Jul 10 00:24:21.370398 sshd-session[2087]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:21.373228 systemd[1]: sshd@1-10.200.8.5:22-10.200.16.10:41434.service: Deactivated successfully. Jul 10 00:24:21.374959 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:24:21.376494 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:24:21.377244 systemd-logind[1709]: Removed session 4. Jul 10 00:24:21.484058 systemd[1]: Started sshd@2-10.200.8.5:22-10.200.16.10:41446.service - OpenSSH per-connection server daemon (10.200.16.10:41446). Jul 10 00:24:22.112162 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 41446 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:22.113505 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:22.118378 systemd-logind[1709]: New session 5 of user core. Jul 10 00:24:22.124859 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:24:22.551289 sshd[2097]: Connection closed by 10.200.16.10 port 41446 Jul 10 00:24:22.551881 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:22.555376 systemd[1]: sshd@2-10.200.8.5:22-10.200.16.10:41446.service: Deactivated successfully. Jul 10 00:24:22.556961 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:24:22.557628 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:24:22.558818 systemd-logind[1709]: Removed session 5. Jul 10 00:24:22.662972 systemd[1]: Started sshd@3-10.200.8.5:22-10.200.16.10:41452.service - OpenSSH per-connection server daemon (10.200.16.10:41452). Jul 10 00:24:23.291677 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 41452 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:23.292972 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:23.297460 systemd-logind[1709]: New session 6 of user core. Jul 10 00:24:23.303843 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:24:23.735459 sshd[2105]: Connection closed by 10.200.16.10 port 41452 Jul 10 00:24:23.736049 sshd-session[2103]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:23.739429 systemd[1]: sshd@3-10.200.8.5:22-10.200.16.10:41452.service: Deactivated successfully. Jul 10 00:24:23.740901 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:24:23.741538 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:24:23.742676 systemd-logind[1709]: Removed session 6. Jul 10 00:24:23.850816 systemd[1]: Started sshd@4-10.200.8.5:22-10.200.16.10:41454.service - OpenSSH per-connection server daemon (10.200.16.10:41454). Jul 10 00:24:24.478073 sshd[2111]: Accepted publickey for core from 10.200.16.10 port 41454 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:24.479383 sshd-session[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:24.483912 systemd-logind[1709]: New session 7 of user core. Jul 10 00:24:24.489836 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:24:24.894951 sudo[2114]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:24:24.895173 sudo[2114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:24:24.908614 sudo[2114]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:25.010834 sshd[2113]: Connection closed by 10.200.16.10 port 41454 Jul 10 00:24:25.011588 sshd-session[2111]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:25.014740 systemd[1]: sshd@4-10.200.8.5:22-10.200.16.10:41454.service: Deactivated successfully. Jul 10 00:24:25.016262 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:24:25.017526 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:24:25.018858 systemd-logind[1709]: Removed session 7. Jul 10 00:24:25.121022 systemd[1]: Started sshd@5-10.200.8.5:22-10.200.16.10:41466.service - OpenSSH per-connection server daemon (10.200.16.10:41466). Jul 10 00:24:25.749046 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 41466 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:25.750925 sshd-session[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:25.755369 systemd-logind[1709]: New session 8 of user core. Jul 10 00:24:25.757854 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:24:25.889164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:24:25.890810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:26.091670 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:24:26.091891 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:24:26.153979 sudo[2127]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:26.159117 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:24:26.159338 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:24:26.171991 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:24:26.323817 augenrules[2149]: No rules Jul 10 00:24:26.325102 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 10 00:24:26.325961 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:24:26.328268 sudo[2126]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:26.347347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:26.353932 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:24:26.391560 kubelet[2159]: E0710 00:24:26.391521 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:24:26.393222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:24:26.393340 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:24:26.393598 systemd[1]: kubelet.service: Consumed 129ms CPU time, 111M memory peak. Jul 10 00:24:26.429041 sshd[2122]: Connection closed by 10.200.16.10 port 41466 Jul 10 00:24:26.429408 sshd-session[2120]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:26.431780 systemd[1]: sshd@5-10.200.8.5:22-10.200.16.10:41466.service: Deactivated successfully. Jul 10 00:24:26.432916 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:24:26.434290 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:24:26.434966 systemd-logind[1709]: Removed session 8. Jul 10 00:24:26.540420 systemd[1]: Started sshd@6-10.200.8.5:22-10.200.16.10:41472.service - OpenSSH per-connection server daemon (10.200.16.10:41472). Jul 10 00:24:27.185415 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 41472 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:24:27.186672 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:27.191508 systemd-logind[1709]: New session 9 of user core. Jul 10 00:24:27.200843 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:24:27.528775 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:24:27.529136 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:24:28.530879 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:24:28.541001 (dockerd)[2191]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:24:29.020950 dockerd[2191]: time="2025-07-10T00:24:29.020907829Z" level=info msg="Starting up" Jul 10 00:24:29.021599 dockerd[2191]: time="2025-07-10T00:24:29.021568348Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:24:29.131403 dockerd[2191]: time="2025-07-10T00:24:29.131373895Z" level=info msg="Loading containers: start." Jul 10 00:24:29.159740 kernel: Initializing XFRM netlink socket Jul 10 00:24:29.361557 systemd-networkd[1362]: docker0: Link UP Jul 10 00:24:29.378536 dockerd[2191]: time="2025-07-10T00:24:29.378502607Z" level=info msg="Loading containers: done." Jul 10 00:24:29.391756 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck852188997-merged.mount: Deactivated successfully. 
Jul 10 00:24:29.401522 dockerd[2191]: time="2025-07-10T00:24:29.401488694Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:24:29.401601 dockerd[2191]: time="2025-07-10T00:24:29.401559639Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:24:29.401671 dockerd[2191]: time="2025-07-10T00:24:29.401657145Z" level=info msg="Initializing buildkit" Jul 10 00:24:29.453542 dockerd[2191]: time="2025-07-10T00:24:29.453509073Z" level=info msg="Completed buildkit initialization" Jul 10 00:24:29.460798 dockerd[2191]: time="2025-07-10T00:24:29.460760649Z" level=info msg="Daemon has completed initialization" Jul 10 00:24:29.460881 dockerd[2191]: time="2025-07-10T00:24:29.460815360Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:24:29.461085 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:24:30.258812 containerd[1742]: time="2025-07-10T00:24:30.258771178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 00:24:30.928660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752636326.mount: Deactivated successfully. Jul 10 00:24:32.079264 containerd[1742]: time="2025-07-10T00:24:32.079222282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:32.081853 containerd[1742]: time="2025-07-10T00:24:32.081820365Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jul 10 00:24:32.084436 containerd[1742]: time="2025-07-10T00:24:32.084400720Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:32.087999 containerd[1742]: time="2025-07-10T00:24:32.087949038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:32.088659 containerd[1742]: time="2025-07-10T00:24:32.088517490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.829711606s" Jul 10 00:24:32.088659 containerd[1742]: time="2025-07-10T00:24:32.088548959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 10 00:24:32.089206 containerd[1742]: time="2025-07-10T00:24:32.089165672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 00:24:33.490843 containerd[1742]: time="2025-07-10T00:24:33.490791123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:33.493109 containerd[1742]: time="2025-07-10T00:24:33.493071403Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954" Jul 10 00:24:33.495468 containerd[1742]: time="2025-07-10T00:24:33.495428492Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:33.500644 containerd[1742]: time="2025-07-10T00:24:33.500592826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:33.501327 containerd[1742]: time="2025-07-10T00:24:33.501185878Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.411991098s" Jul 10 00:24:33.501327 containerd[1742]: time="2025-07-10T00:24:33.501218243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 10 00:24:33.501847 containerd[1742]: time="2025-07-10T00:24:33.501743988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 00:24:34.750814 containerd[1742]: time="2025-07-10T00:24:34.750768899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:34.752825 containerd[1742]: time="2025-07-10T00:24:34.752790744Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063" Jul 10 00:24:34.755195 containerd[1742]: time="2025-07-10T00:24:34.755158777Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:34.758785 containerd[1742]: time="2025-07-10T00:24:34.758756002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:34.759437 containerd[1742]: time="2025-07-10T00:24:34.759414250Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.257647343s" Jul 10 00:24:34.759477 containerd[1742]: time="2025-07-10T00:24:34.759456288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 10 00:24:34.759925 containerd[1742]: time="2025-07-10T00:24:34.759906549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:24:35.669773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160786909.mount: Deactivated successfully. 
Jul 10 00:24:36.020548 containerd[1742]: time="2025-07-10T00:24:36.020445105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:36.022755 containerd[1742]: time="2025-07-10T00:24:36.022720220Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754" Jul 10 00:24:36.025343 containerd[1742]: time="2025-07-10T00:24:36.025316568Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:36.028709 containerd[1742]: time="2025-07-10T00:24:36.028669678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:36.029021 containerd[1742]: time="2025-07-10T00:24:36.028948308Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.269017304s" Jul 10 00:24:36.029021 containerd[1742]: time="2025-07-10T00:24:36.029016112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 10 00:24:36.029488 containerd[1742]: time="2025-07-10T00:24:36.029449278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 00:24:36.547278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:24:36.548563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:36.573110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087586308.mount: Deactivated successfully. Jul 10 00:24:37.085889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:37.095918 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:24:37.129262 kubelet[2472]: E0710 00:24:37.129205 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:24:37.130793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:24:37.130898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:24:37.131261 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak. 
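At this point the kubelet has crash-looped three times (restart counters 1 through 3) for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is normally written during kubeadm init or kubeadm join, so the repeated exits are expected until bootstrap runs. A minimal pre-flight check mirroring the logged failure, assuming standard kubeadm behaviour:

# Sketch: pre-flight check mirroring the failure logged above. The path is taken
# verbatim from the kubelet error; the expectation that kubeadm writes it during
# init/join is standard kubeadm behaviour, not something shown in this log.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    print(f"{KUBELET_CONFIG} missing - kubelet will exit with status 1 "
          "until 'kubeadm init' or 'kubeadm join' has written it")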
Jul 10 00:24:38.001996 containerd[1742]: time="2025-07-10T00:24:38.001951317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:38.004925 containerd[1742]: time="2025-07-10T00:24:38.004886158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jul 10 00:24:38.008655 containerd[1742]: time="2025-07-10T00:24:38.008614052Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:38.013003 containerd[1742]: time="2025-07-10T00:24:38.012954149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:38.013938 containerd[1742]: time="2025-07-10T00:24:38.013738999Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.984265895s" Jul 10 00:24:38.013938 containerd[1742]: time="2025-07-10T00:24:38.013770282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 10 00:24:38.014258 containerd[1742]: time="2025-07-10T00:24:38.014235686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:24:38.582009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574056864.mount: Deactivated successfully. 
Jul 10 00:24:38.600128 containerd[1742]: time="2025-07-10T00:24:38.600095096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:24:38.604174 containerd[1742]: time="2025-07-10T00:24:38.604138605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 10 00:24:38.607441 containerd[1742]: time="2025-07-10T00:24:38.607402885Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:24:38.613552 containerd[1742]: time="2025-07-10T00:24:38.613513544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:24:38.614313 containerd[1742]: time="2025-07-10T00:24:38.614023060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.758954ms" Jul 10 00:24:38.614313 containerd[1742]: time="2025-07-10T00:24:38.614058514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:24:38.614597 containerd[1742]: time="2025-07-10T00:24:38.614578501Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 00:24:39.172566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800616894.mount: Deactivated successfully. 
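Each pull message above quotes both the transferred size ("bytes read") and the elapsed time, so the effective registry throughput can be read off directly; the kube-apiserver image, for instance, works out to about 16 MB/s (30,079,107 bytes in roughly 1.83 s). A small sketch using the figures quoted for the pulls completed so far:

# Rough throughput per image pull, computed from the "bytes read" and duration
# values quoted in the containerd log lines above (values copied from the log).
pulls = {
    "kube-apiserver:v1.33.2":          (30_079_107, 1.829711606),
    "kube-controller-manager:v1.33.2": (26_018_954, 1.411991098),
    "kube-scheduler:v1.33.2":          (20_155_063, 1.257647343),
    "kube-proxy:v1.33.2":              (31_892_754, 1.269017304),
    "coredns:v1.12.0":                 (20_942_246, 1.984265895),
    "pause:3.10":                      (321_146,    0.599758954),
}

for image, (size_bytes, seconds) in pulls.items():
    print(f"{image:35s} {size_bytes / seconds / 1e6:6.1f} MB/s")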
Jul 10 00:24:40.737240 containerd[1742]: time="2025-07-10T00:24:40.737194292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:40.739302 containerd[1742]: time="2025-07-10T00:24:40.739270636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183" Jul 10 00:24:40.741709 containerd[1742]: time="2025-07-10T00:24:40.741669498Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:40.745446 containerd[1742]: time="2025-07-10T00:24:40.745413132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:40.746215 containerd[1742]: time="2025-07-10T00:24:40.746097758Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.131492702s" Jul 10 00:24:40.746215 containerd[1742]: time="2025-07-10T00:24:40.746128777Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 10 00:24:41.334349 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 10 00:24:42.095815 update_engine[1710]: I20250710 00:24:42.095191 1710 update_attempter.cc:509] Updating boot flags... Jul 10 00:24:42.830068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:42.830364 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.4M memory peak. Jul 10 00:24:42.832815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:42.854484 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-9.scope)... Jul 10 00:24:42.854505 systemd[1]: Reloading... Jul 10 00:24:42.942740 zram_generator::config[2705]: No configuration found. Jul 10 00:24:43.091983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:24:43.181349 systemd[1]: Reloading finished in 326 ms. Jul 10 00:24:43.221190 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:24:43.221274 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:24:43.221620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:43.221675 systemd[1]: kubelet.service: Consumed 85ms CPU time, 91.9M memory peak. Jul 10 00:24:43.223059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:43.974545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:43.982942 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:24:44.016799 kubelet[2772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:44.016799 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:24:44.016799 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:44.017038 kubelet[2772]: I0710 00:24:44.016822 2772 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:24:44.149409 kubelet[2772]: I0710 00:24:44.149380 2772 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:24:44.149409 kubelet[2772]: I0710 00:24:44.149403 2772 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:24:44.149577 kubelet[2772]: I0710 00:24:44.149567 2772 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:24:44.181743 kubelet[2772]: E0710 00:24:44.181717 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:24:44.181903 kubelet[2772]: I0710 00:24:44.181886 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:24:44.187272 kubelet[2772]: I0710 00:24:44.187255 2772 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:24:44.190046 kubelet[2772]: I0710 00:24:44.190027 2772 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:24:44.190228 kubelet[2772]: I0710 00:24:44.190210 2772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:24:44.190363 kubelet[2772]: I0710 00:24:44.190228 2772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-69725f0cc9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:24:44.190479 kubelet[2772]: I0710 00:24:44.190370 2772 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:24:44.190479 kubelet[2772]: I0710 00:24:44.190379 2772 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:24:44.190479 kubelet[2772]: I0710 00:24:44.190478 2772 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:44.192752 kubelet[2772]: I0710 00:24:44.192739 2772 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:24:44.192807 kubelet[2772]: I0710 00:24:44.192759 2772 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:24:44.192807 kubelet[2772]: I0710 00:24:44.192782 2772 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:24:44.192807 kubelet[2772]: I0710 00:24:44.192795 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:24:44.199172 kubelet[2772]: E0710 00:24:44.199143 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:24:44.199400 kubelet[2772]: E0710 00:24:44.199228 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-69725f0cc9&limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Jul 10 00:24:44.199637 kubelet[2772]: I0710 00:24:44.199623 2772 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:24:44.200185 kubelet[2772]: I0710 00:24:44.200122 2772 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:24:44.200944 kubelet[2772]: W0710 00:24:44.200929 2772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:24:44.203184 kubelet[2772]: I0710 00:24:44.203103 2772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:24:44.203286 kubelet[2772]: I0710 00:24:44.203279 2772 server.go:1289] "Started kubelet" Jul 10 00:24:44.209144 kubelet[2772]: I0710 00:24:44.209125 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:24:44.209940 kubelet[2772]: I0710 00:24:44.209915 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:24:44.210587 kubelet[2772]: I0710 00:24:44.210551 2772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:24:44.210884 kubelet[2772]: I0710 00:24:44.210872 2772 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:24:44.213028 kubelet[2772]: I0710 00:24:44.213006 2772 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:24:44.213751 kubelet[2772]: I0710 00:24:44.213736 2772 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:24:44.214029 kubelet[2772]: I0710 00:24:44.214014 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:24:44.214971 kubelet[2772]: I0710 00:24:44.214953 2772 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:24:44.215027 kubelet[2772]: I0710 00:24:44.214988 2772 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:24:44.216538 kubelet[2772]: E0710 00:24:44.216289 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:24:44.216538 kubelet[2772]: E0710 00:24:44.216470 2772 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" Jul 10 00:24:44.218163 kubelet[2772]: E0710 00:24:44.218138 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-69725f0cc9?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="200ms" Jul 10 00:24:44.221990 kubelet[2772]: E0710 00:24:44.219234 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.5:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-69725f0cc9.1850bc1c2a5f5704 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-69725f0cc9,UID:ci-4344.1.1-n-69725f0cc9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-69725f0cc9,},FirstTimestamp:2025-07-10 00:24:44.203120388 +0000 UTC m=+0.216660590,LastTimestamp:2025-07-10 00:24:44.203120388 +0000 UTC m=+0.216660590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-69725f0cc9,}" Jul 10 00:24:44.225772 kubelet[2772]: I0710 00:24:44.224233 2772 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:24:44.225772 kubelet[2772]: I0710 00:24:44.224246 2772 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:24:44.225772 kubelet[2772]: I0710 00:24:44.224306 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:24:44.230199 kubelet[2772]: E0710 00:24:44.230137 2772 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:24:44.236993 kubelet[2772]: I0710 00:24:44.236970 2772 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:24:44.237783 kubelet[2772]: I0710 00:24:44.237769 2772 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:24:44.237843 kubelet[2772]: I0710 00:24:44.237838 2772 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:24:44.237878 kubelet[2772]: I0710 00:24:44.237870 2772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:24:44.237899 kubelet[2772]: I0710 00:24:44.237896 2772 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:24:44.237943 kubelet[2772]: E0710 00:24:44.237935 2772 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:24:44.248483 kubelet[2772]: E0710 00:24:44.248467 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:24:44.251123 kubelet[2772]: I0710 00:24:44.251105 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:24:44.251206 kubelet[2772]: I0710 00:24:44.251200 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:24:44.251252 kubelet[2772]: I0710 00:24:44.251248 2772 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:44.260144 kubelet[2772]: I0710 00:24:44.260133 2772 policy_none.go:49] "None policy: Start" Jul 10 00:24:44.260207 kubelet[2772]: I0710 00:24:44.260202 2772 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:24:44.260235 kubelet[2772]: I0710 00:24:44.260232 2772 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:24:44.267573 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
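The container manager line logged just above spells out the node configuration this kubelet is running with, including its hard eviction thresholds. Restated, they are memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15% and imagefs.inodesFree < 5%, which match the kubelet's usual Linux defaults. A short restatement with the values copied from the logged nodeConfig:

# The HardEvictionThresholds block from the container manager line above,
# restated as signal -> threshold (values copied from the log).
HARD_EVICTION = {
    "memory.available":   "100Mi",
    "nodefs.available":   "10%",
    "nodefs.inodesFree":  "5%",
    "imagefs.available":  "15%",
    "imagefs.inodesFree": "5%",
}

for signal, threshold in HARD_EVICTION.items():
    print(f"evict when {signal} < {threshold}")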
Jul 10 00:24:44.276551 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:24:44.278893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:24:44.292215 kubelet[2772]: E0710 00:24:44.292196 2772 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:24:44.292397 kubelet[2772]: I0710 00:24:44.292337 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:24:44.292397 kubelet[2772]: I0710 00:24:44.292352 2772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:24:44.292769 kubelet[2772]: I0710 00:24:44.292752 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:24:44.293409 kubelet[2772]: E0710 00:24:44.293393 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:24:44.293557 kubelet[2772]: E0710 00:24:44.293514 2772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-69725f0cc9\" not found" Jul 10 00:24:44.347881 systemd[1]: Created slice kubepods-burstable-pod014b1d1bb5f763745c66b86d069b0e3d.slice - libcontainer container kubepods-burstable-pod014b1d1bb5f763745c66b86d069b0e3d.slice. Jul 10 00:24:44.355448 kubelet[2772]: E0710 00:24:44.355298 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.358843 systemd[1]: Created slice kubepods-burstable-podbc4a5ed4afc59072b01dc8672b4d1891.slice - libcontainer container kubepods-burstable-podbc4a5ed4afc59072b01dc8672b4d1891.slice. Jul 10 00:24:44.364535 kubelet[2772]: E0710 00:24:44.364518 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.366772 systemd[1]: Created slice kubepods-burstable-podb84c9a22644bd438208384c1d3a5bbd2.slice - libcontainer container kubepods-burstable-podb84c9a22644bd438208384c1d3a5bbd2.slice. 
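The slices created above follow the kubelet's systemd cgroup-driver naming: a parent kubepods.slice, per-QoS child slices (kubepods-burstable.slice, kubepods-besteffort.slice), and one slice per pod whose name embeds the QoS class and the pod UID, e.g. kubepods-burstable-pod014b1d1bb5f763745c66b86d069b0e3d.slice. A hedged sketch of that naming rule; the dash-to-underscore escaping and the guaranteed-QoS case are the usual systemd-driver conventions and are not exercised in this log.

# Sketch of the slice-name pattern visible above for the systemd cgroup driver.
# Underscore escaping of dashes in the UID is the usual systemd-driver
# convention; it does not show up here because these UIDs contain no dashes.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    escaped_uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":   # guaranteed pods sit directly under kubepods.slice
        return f"kubepods-pod{escaped_uid}.slice"
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(pod_slice_name("burstable", "014b1d1bb5f763745c66b86d069b0e3d"))
# -> kubepods-burstable-pod014b1d1bb5f763745c66b86d069b0e3d.slice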
Jul 10 00:24:44.368433 kubelet[2772]: E0710 00:24:44.368417 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.375778 kubelet[2772]: E0710 00:24:44.375684 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.5:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-69725f0cc9.1850bc1c2a5f5704 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-69725f0cc9,UID:ci-4344.1.1-n-69725f0cc9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-69725f0cc9,},FirstTimestamp:2025-07-10 00:24:44.203120388 +0000 UTC m=+0.216660590,LastTimestamp:2025-07-10 00:24:44.203120388 +0000 UTC m=+0.216660590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-69725f0cc9,}" Jul 10 00:24:44.393646 kubelet[2772]: I0710 00:24:44.393632 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.393910 kubelet[2772]: E0710 00:24:44.393894 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416143 kubelet[2772]: I0710 00:24:44.416111 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416190 kubelet[2772]: I0710 00:24:44.416147 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416190 kubelet[2772]: I0710 00:24:44.416164 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416245 kubelet[2772]: I0710 00:24:44.416185 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416245 kubelet[2772]: I0710 00:24:44.416206 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416245 kubelet[2772]: I0710 00:24:44.416224 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b84c9a22644bd438208384c1d3a5bbd2-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-69725f0cc9\" (UID: \"b84c9a22644bd438208384c1d3a5bbd2\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416245 kubelet[2772]: I0710 00:24:44.416240 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416336 kubelet[2772]: I0710 00:24:44.416256 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.416336 kubelet[2772]: I0710 00:24:44.416272 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.419529 kubelet[2772]: E0710 00:24:44.419513 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-69725f0cc9?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="400ms" Jul 10 00:24:44.595497 kubelet[2772]: I0710 00:24:44.595340 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.595768 kubelet[2772]: E0710 00:24:44.595644 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:44.656936 containerd[1742]: time="2025-07-10T00:24:44.656896141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-69725f0cc9,Uid:014b1d1bb5f763745c66b86d069b0e3d,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:44.665325 containerd[1742]: time="2025-07-10T00:24:44.665295374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-69725f0cc9,Uid:bc4a5ed4afc59072b01dc8672b4d1891,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:44.668989 containerd[1742]: time="2025-07-10T00:24:44.668962428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-69725f0cc9,Uid:b84c9a22644bd438208384c1d3a5bbd2,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:44.717094 containerd[1742]: time="2025-07-10T00:24:44.717071053Z" 
level=info msg="connecting to shim 05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f" address="unix:///run/containerd/s/49e8dbd8cbfacaf5c43eb1268431b2ac617ba5cc5120e315765bde93535e9b57" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:44.746070 containerd[1742]: time="2025-07-10T00:24:44.746030819Z" level=info msg="connecting to shim 29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa" address="unix:///run/containerd/s/5425ca386265a11373ea91287229d8022f55f3d04a28f8bde00e39dfd323e0e1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:44.746861 systemd[1]: Started cri-containerd-05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f.scope - libcontainer container 05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f. Jul 10 00:24:44.774364 containerd[1742]: time="2025-07-10T00:24:44.774332432Z" level=info msg="connecting to shim c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac" address="unix:///run/containerd/s/9a40dddb67b37193d6bf67bd9e643af2b37a02e39a5c22987e8e73d32474bd76" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:44.779877 systemd[1]: Started cri-containerd-29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa.scope - libcontainer container 29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa. Jul 10 00:24:44.797933 systemd[1]: Started cri-containerd-c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac.scope - libcontainer container c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac. Jul 10 00:24:44.820456 kubelet[2772]: E0710 00:24:44.820421 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-69725f0cc9?timeout=10s\": dial tcp 10.200.8.5:6443: connect: connection refused" interval="800ms" Jul 10 00:24:44.831505 containerd[1742]: time="2025-07-10T00:24:44.831448273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-69725f0cc9,Uid:014b1d1bb5f763745c66b86d069b0e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f\"" Jul 10 00:24:44.840592 containerd[1742]: time="2025-07-10T00:24:44.840521287Z" level=info msg="CreateContainer within sandbox \"05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:24:44.842582 containerd[1742]: time="2025-07-10T00:24:44.842562576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-69725f0cc9,Uid:bc4a5ed4afc59072b01dc8672b4d1891,Namespace:kube-system,Attempt:0,} returns sandbox id \"29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa\"" Jul 10 00:24:44.848852 containerd[1742]: time="2025-07-10T00:24:44.848321809Z" level=info msg="CreateContainer within sandbox \"29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:24:44.863300 containerd[1742]: time="2025-07-10T00:24:44.863280081Z" level=info msg="Container 17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:44.869633 containerd[1742]: time="2025-07-10T00:24:44.869609799Z" level=info msg="Container 83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:44.880975 
containerd[1742]: time="2025-07-10T00:24:44.880959140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-69725f0cc9,Uid:b84c9a22644bd438208384c1d3a5bbd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac\"" Jul 10 00:24:44.889266 containerd[1742]: time="2025-07-10T00:24:44.889243485Z" level=info msg="CreateContainer within sandbox \"c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:24:44.904179 containerd[1742]: time="2025-07-10T00:24:44.904155937Z" level=info msg="CreateContainer within sandbox \"05d8b9e78f942f2beef3f3f3365827ef2bd3ff61ce9b00659e7a2bc7265f480f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376\"" Jul 10 00:24:44.904552 containerd[1742]: time="2025-07-10T00:24:44.904532742Z" level=info msg="StartContainer for \"17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376\"" Jul 10 00:24:44.905219 containerd[1742]: time="2025-07-10T00:24:44.905195813Z" level=info msg="connecting to shim 17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376" address="unix:///run/containerd/s/49e8dbd8cbfacaf5c43eb1268431b2ac617ba5cc5120e315765bde93535e9b57" protocol=ttrpc version=3 Jul 10 00:24:44.918866 systemd[1]: Started cri-containerd-17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376.scope - libcontainer container 17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376. Jul 10 00:24:44.925730 containerd[1742]: time="2025-07-10T00:24:44.925690083Z" level=info msg="CreateContainer within sandbox \"29afa516aa07f104bed716ccabed6ebb6393cacd139e3daa3fe574de090235aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1\"" Jul 10 00:24:44.926233 containerd[1742]: time="2025-07-10T00:24:44.926213601Z" level=info msg="StartContainer for \"83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1\"" Jul 10 00:24:44.927238 containerd[1742]: time="2025-07-10T00:24:44.927209619Z" level=info msg="connecting to shim 83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1" address="unix:///run/containerd/s/5425ca386265a11373ea91287229d8022f55f3d04a28f8bde00e39dfd323e0e1" protocol=ttrpc version=3 Jul 10 00:24:44.929960 containerd[1742]: time="2025-07-10T00:24:44.929847908Z" level=info msg="Container 11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:44.944819 systemd[1]: Started cri-containerd-83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1.scope - libcontainer container 83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1. 
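While the API server at 10.200.8.5:6443 is still refusing connections, the lease controller's "will retry" interval doubles on each failure: 200ms, then 400ms, then 800ms in the errors above. A minimal sketch of that doubling; the cap used below is an assumption for illustration and is not taken from this log.

# Minimal sketch of the doubling retry interval seen in the lease-controller
# errors above (200ms -> 400ms -> 800ms). The 7s cap is an assumed value for
# illustration only.
def retry_intervals(start_ms: int = 200, factor: int = 2, cap_ms: int = 7000):
    interval = start_ms
    while True:
        yield min(interval, cap_ms)
        interval *= factor

gen = retry_intervals()
print([next(gen) for _ in range(5)])   # [200, 400, 800, 1600, 3200]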
Jul 10 00:24:44.964372 containerd[1742]: time="2025-07-10T00:24:44.964330931Z" level=info msg="CreateContainer within sandbox \"c058657a1371d94efd03197064df33143bcbae5b96d9104c59c6ba376c2542ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441\"" Jul 10 00:24:44.967328 containerd[1742]: time="2025-07-10T00:24:44.967300753Z" level=info msg="StartContainer for \"11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441\"" Jul 10 00:24:44.968409 containerd[1742]: time="2025-07-10T00:24:44.968374609Z" level=info msg="connecting to shim 11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441" address="unix:///run/containerd/s/9a40dddb67b37193d6bf67bd9e643af2b37a02e39a5c22987e8e73d32474bd76" protocol=ttrpc version=3 Jul 10 00:24:44.999000 systemd[1]: Started cri-containerd-11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441.scope - libcontainer container 11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441. Jul 10 00:24:45.001109 kubelet[2772]: I0710 00:24:45.000920 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:45.002255 containerd[1742]: time="2025-07-10T00:24:45.001823364Z" level=info msg="StartContainer for \"17949fc4cd9cc7512122e05ba3b1e0d0f1b5d77390ae3c90d9c5738405bfa376\" returns successfully" Jul 10 00:24:45.002330 kubelet[2772]: E0710 00:24:45.002087 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.5:6443/api/v1/nodes\": dial tcp 10.200.8.5:6443: connect: connection refused" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:45.012491 containerd[1742]: time="2025-07-10T00:24:45.012393703Z" level=info msg="StartContainer for \"83ecca5d852860571b9535dcd23964fabb10d509b4bf080e08586191b42f94d1\" returns successfully" Jul 10 00:24:45.076802 containerd[1742]: time="2025-07-10T00:24:45.076781802Z" level=info msg="StartContainer for \"11818973c319bde1ab5d1138be83b25a5429540f7d4601679c83beca82e41441\" returns successfully" Jul 10 00:24:45.259810 kubelet[2772]: E0710 00:24:45.258081 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:45.260550 kubelet[2772]: E0710 00:24:45.259749 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:45.262976 kubelet[2772]: E0710 00:24:45.262864 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:45.804733 kubelet[2772]: I0710 00:24:45.804647 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:46.267780 kubelet[2772]: E0710 00:24:46.266206 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:46.268675 kubelet[2772]: E0710 00:24:46.268662 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:46.547812 kubelet[2772]: E0710 00:24:46.546580 2772 kubelet.go:3305] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:46.788612 kubelet[2772]: E0710 00:24:46.788571 2772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-69725f0cc9\" not found" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:46.925812 kubelet[2772]: I0710 00:24:46.925609 2772 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.016935 kubelet[2772]: I0710 00:24:47.016873 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.021637 kubelet[2772]: E0710 00:24:47.021346 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.021637 kubelet[2772]: I0710 00:24:47.021366 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.024206 kubelet[2772]: E0710 00:24:47.024170 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.024449 kubelet[2772]: I0710 00:24:47.024302 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.026883 kubelet[2772]: E0710 00:24:47.026862 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-69725f0cc9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.200907 kubelet[2772]: I0710 00:24:47.200818 2772 apiserver.go:52] "Watching apiserver" Jul 10 00:24:47.215781 kubelet[2772]: I0710 00:24:47.215759 2772 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:24:47.265787 kubelet[2772]: I0710 00:24:47.265769 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:47.267218 kubelet[2772]: E0710 00:24:47.267188 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-69725f0cc9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:48.927844 systemd[1]: Reload requested from client PID 3049 ('systemctl') (unit session-9.scope)... Jul 10 00:24:48.927858 systemd[1]: Reloading... Jul 10 00:24:49.001732 zram_generator::config[3091]: No configuration found. Jul 10 00:24:49.094622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:24:49.196501 systemd[1]: Reloading finished in 268 ms. Jul 10 00:24:49.215422 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:49.228362 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 10 00:24:49.228588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:49.228631 systemd[1]: kubelet.service: Consumed 505ms CPU time, 129.8M memory peak. Jul 10 00:24:49.229945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:49.726926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:49.736992 (kubelet)[3162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:24:49.773684 kubelet[3162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:49.773684 kubelet[3162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:24:49.773684 kubelet[3162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:49.773974 kubelet[3162]: I0710 00:24:49.773746 3162 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:24:49.782229 kubelet[3162]: I0710 00:24:49.781558 3162 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:24:49.782229 kubelet[3162]: I0710 00:24:49.781579 3162 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:24:49.782229 kubelet[3162]: I0710 00:24:49.781935 3162 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:24:49.784309 kubelet[3162]: I0710 00:24:49.784282 3162 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 00:24:49.786071 kubelet[3162]: I0710 00:24:49.786055 3162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:24:49.789416 kubelet[3162]: I0710 00:24:49.789404 3162 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:24:49.791335 kubelet[3162]: I0710 00:24:49.791324 3162 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:24:49.791534 kubelet[3162]: I0710 00:24:49.791515 3162 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:24:49.791671 kubelet[3162]: I0710 00:24:49.791572 3162 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-69725f0cc9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:24:49.791775 kubelet[3162]: I0710 00:24:49.791742 3162 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:24:49.791775 kubelet[3162]: I0710 00:24:49.791757 3162 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:24:49.791824 kubelet[3162]: I0710 00:24:49.791795 3162 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:49.791929 kubelet[3162]: I0710 00:24:49.791912 3162 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:24:49.791956 kubelet[3162]: I0710 00:24:49.791933 3162 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:24:49.791979 kubelet[3162]: I0710 00:24:49.791956 3162 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:24:49.791979 kubelet[3162]: I0710 00:24:49.791969 3162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:24:49.795723 kubelet[3162]: I0710 00:24:49.795275 3162 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:24:49.795793 kubelet[3162]: I0710 00:24:49.795727 3162 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:24:49.803071 kubelet[3162]: I0710 00:24:49.803054 3162 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:24:49.803135 kubelet[3162]: I0710 00:24:49.803092 3162 server.go:1289] "Started kubelet" Jul 10 00:24:49.807620 kubelet[3162]: I0710 00:24:49.806376 3162 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:24:49.807620 kubelet[3162]: 
I0710 00:24:49.807543 3162 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:24:49.808971 kubelet[3162]: I0710 00:24:49.808683 3162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:24:49.809278 kubelet[3162]: I0710 00:24:49.809255 3162 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:24:49.810188 kubelet[3162]: I0710 00:24:49.810168 3162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:24:49.813978 kubelet[3162]: I0710 00:24:49.813957 3162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:24:49.817798 kubelet[3162]: I0710 00:24:49.817785 3162 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:24:49.819459 kubelet[3162]: I0710 00:24:49.819440 3162 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:24:49.819530 kubelet[3162]: I0710 00:24:49.819513 3162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:24:49.820173 kubelet[3162]: I0710 00:24:49.820147 3162 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:24:49.820321 kubelet[3162]: I0710 00:24:49.820314 3162 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:24:49.822327 kubelet[3162]: E0710 00:24:49.821815 3162 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:24:49.822327 kubelet[3162]: I0710 00:24:49.821967 3162 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:24:49.823725 kubelet[3162]: I0710 00:24:49.823073 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:24:49.824678 kubelet[3162]: I0710 00:24:49.824659 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:24:49.824678 kubelet[3162]: I0710 00:24:49.824680 3162 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:24:49.824821 kubelet[3162]: I0710 00:24:49.824730 3162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
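The container manager NodeConfig logged at 00:24:49.791 above carries five HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). Below is a minimal Python sketch of how such LessThan thresholds could be evaluated; the threshold values come from the log, while the helper function and the node statistics are made-up illustrations, not kubelet code.

```python
# Hypothetical sketch of evaluating the hard-eviction thresholds from the
# NodeConfig logged above. Threshold values are taken from the log entry;
# the node stats below are invented illustration data.

MI = 1024 * 1024

# Signal -> (operator, threshold). Percentages apply to the signal's capacity.
HARD_EVICTION_THRESHOLDS = {
    "memory.available":   ("LessThan", {"quantity": 100 * MI}),
    "nodefs.available":   ("LessThan", {"percentage": 0.10}),
    "nodefs.inodesFree":  ("LessThan", {"percentage": 0.05}),
    "imagefs.available":  ("LessThan", {"percentage": 0.15}),
    "imagefs.inodesFree": ("LessThan", {"percentage": 0.05}),
}

def threshold_met(available: float, capacity: float, value: dict) -> bool:
    """Return True if the signal has fallen below its hard-eviction threshold."""
    if "quantity" in value:
        return available < value["quantity"]
    return available < value["percentage"] * capacity

# Illustration only: pretend the node has 16 GiB RAM with 90 MiB free.
print(threshold_met(90 * MI, 16 * 1024 * MI,
                    HARD_EVICTION_THRESHOLDS["memory.available"][1]))  # True -> evict
```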
Jul 10 00:24:49.824821 kubelet[3162]: I0710 00:24:49.824738 3162 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:24:49.824821 kubelet[3162]: E0710 00:24:49.824763 3162 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:24:49.864544 kubelet[3162]: I0710 00:24:49.864521 3162 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:24:49.864544 kubelet[3162]: I0710 00:24:49.864533 3162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:24:49.864639 kubelet[3162]: I0710 00:24:49.864550 3162 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:49.864741 kubelet[3162]: I0710 00:24:49.864660 3162 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:24:49.864741 kubelet[3162]: I0710 00:24:49.864670 3162 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:24:49.864741 kubelet[3162]: I0710 00:24:49.864724 3162 policy_none.go:49] "None policy: Start" Jul 10 00:24:49.864741 kubelet[3162]: I0710 00:24:49.864739 3162 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:24:49.864842 kubelet[3162]: I0710 00:24:49.864748 3162 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:24:49.864842 kubelet[3162]: I0710 00:24:49.864828 3162 state_mem.go:75] "Updated machine memory state" Jul 10 00:24:49.867839 kubelet[3162]: E0710 00:24:49.867565 3162 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:24:49.867839 kubelet[3162]: I0710 00:24:49.867659 3162 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:24:49.867839 kubelet[3162]: I0710 00:24:49.867666 3162 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:24:49.868358 kubelet[3162]: I0710 00:24:49.868347 3162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:24:49.870339 kubelet[3162]: E0710 00:24:49.870325 3162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:24:49.925435 kubelet[3162]: I0710 00:24:49.925416 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:49.925526 kubelet[3162]: I0710 00:24:49.925517 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:49.926902 kubelet[3162]: I0710 00:24:49.925999 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:49.937553 kubelet[3162]: I0710 00:24:49.937489 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:49.937947 kubelet[3162]: I0710 00:24:49.937934 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:49.938444 kubelet[3162]: I0710 00:24:49.938431 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:49.940847 sudo[3200]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:24:49.941071 sudo[3200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:24:49.975008 kubelet[3162]: I0710 00:24:49.974993 3162 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:49.989751 kubelet[3162]: I0710 00:24:49.988887 3162 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:49.989751 kubelet[3162]: I0710 00:24:49.988963 3162 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021731 kubelet[3162]: I0710 00:24:50.021690 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021799 kubelet[3162]: I0710 00:24:50.021744 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021799 kubelet[3162]: I0710 00:24:50.021769 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021799 kubelet[3162]: I0710 00:24:50.021784 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021871 kubelet[3162]: I0710 00:24:50.021802 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4a5ed4afc59072b01dc8672b4d1891-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" (UID: \"bc4a5ed4afc59072b01dc8672b4d1891\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021871 kubelet[3162]: I0710 00:24:50.021841 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b84c9a22644bd438208384c1d3a5bbd2-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-69725f0cc9\" (UID: \"b84c9a22644bd438208384c1d3a5bbd2\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021871 kubelet[3162]: I0710 00:24:50.021859 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021945 kubelet[3162]: I0710 00:24:50.021874 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.021945 kubelet[3162]: I0710 00:24:50.021890 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014b1d1bb5f763745c66b86d069b0e3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" (UID: \"014b1d1bb5f763745c66b86d069b0e3d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.435503 sudo[3200]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:50.792866 kubelet[3162]: I0710 00:24:50.792779 3162 apiserver.go:52] "Watching apiserver" Jul 10 00:24:50.820386 kubelet[3162]: I0710 00:24:50.820362 3162 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:24:50.852344 kubelet[3162]: I0710 00:24:50.852010 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.853897 kubelet[3162]: I0710 00:24:50.852813 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.854044 kubelet[3162]: I0710 00:24:50.852920 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.864503 kubelet[3162]: I0710 00:24:50.864383 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" podStartSLOduration=1.864371362 podStartE2EDuration="1.864371362s" 
podCreationTimestamp="2025-07-10 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:50.864063452 +0000 UTC m=+1.123449777" watchObservedRunningTime="2025-07-10 00:24:50.864371362 +0000 UTC m=+1.123757683" Jul 10 00:24:50.865012 kubelet[3162]: I0710 00:24:50.864778 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" podStartSLOduration=1.86477099 podStartE2EDuration="1.86477099s" podCreationTimestamp="2025-07-10 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:50.850572064 +0000 UTC m=+1.109958384" watchObservedRunningTime="2025-07-10 00:24:50.86477099 +0000 UTC m=+1.124157308" Jul 10 00:24:50.866084 kubelet[3162]: I0710 00:24:50.866029 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:50.866223 kubelet[3162]: E0710 00:24:50.866154 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-69725f0cc9\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.867533 kubelet[3162]: I0710 00:24:50.867478 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:50.867725 kubelet[3162]: E0710 00:24:50.867715 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-69725f0cc9\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.868052 kubelet[3162]: I0710 00:24:50.867649 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:24:50.868790 kubelet[3162]: E0710 00:24:50.868450 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-69725f0cc9\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-69725f0cc9" Jul 10 00:24:50.889974 kubelet[3162]: I0710 00:24:50.889937 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-69725f0cc9" podStartSLOduration=1.889924933 podStartE2EDuration="1.889924933s" podCreationTimestamp="2025-07-10 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:50.878133062 +0000 UTC m=+1.137519382" watchObservedRunningTime="2025-07-10 00:24:50.889924933 +0000 UTC m=+1.149311252" Jul 10 00:24:51.901392 sudo[2173]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:52.001220 sshd[2172]: Connection closed by 10.200.16.10 port 41472 Jul 10 00:24:52.001663 sshd-session[2170]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:52.004379 systemd[1]: sshd@6-10.200.8.5:22-10.200.16.10:41472.service: Deactivated successfully. Jul 10 00:24:52.006379 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:24:52.006578 systemd[1]: session-9.scope: Consumed 3.634s CPU time, 274M memory peak. Jul 10 00:24:52.009011 systemd-logind[1709]: Session 9 logged out. 
Waiting for processes to exit. Jul 10 00:24:52.009912 systemd-logind[1709]: Removed session 9. Jul 10 00:24:54.084206 kubelet[3162]: I0710 00:24:54.084173 3162 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:24:54.084746 kubelet[3162]: I0710 00:24:54.084623 3162 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:24:54.085061 containerd[1742]: time="2025-07-10T00:24:54.084465911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:24:55.085171 systemd[1]: Created slice kubepods-besteffort-podca2eed68_6a1a_471f_9ad2_d5c1f93e6fe8.slice - libcontainer container kubepods-besteffort-podca2eed68_6a1a_471f_9ad2_d5c1f93e6fe8.slice. Jul 10 00:24:55.098952 systemd[1]: Created slice kubepods-burstable-podde61c3b0_1008_4f61_b862_a166ebc14d6a.slice - libcontainer container kubepods-burstable-podde61c3b0_1008_4f61_b862_a166ebc14d6a.slice. Jul 10 00:24:55.156973 kubelet[3162]: I0710 00:24:55.156938 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-net\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.156999 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-kernel\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.157020 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8-lib-modules\") pod \"kube-proxy-psr8z\" (UID: \"ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8\") " pod="kube-system/kube-proxy-psr8z" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.157046 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-run\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.157062 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-hostproc\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.157078 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-etc-cni-netd\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157242 kubelet[3162]: I0710 00:24:55.157091 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-hubble-tls\") pod \"cilium-9jrsn\" (UID: 
\"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157385 kubelet[3162]: I0710 00:24:55.157107 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzb7f\" (UniqueName: \"kubernetes.io/projected/ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8-kube-api-access-gzb7f\") pod \"kube-proxy-psr8z\" (UID: \"ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8\") " pod="kube-system/kube-proxy-psr8z" Jul 10 00:24:55.157385 kubelet[3162]: I0710 00:24:55.157132 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-bpf-maps\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157385 kubelet[3162]: I0710 00:24:55.157148 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-cgroup\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157385 kubelet[3162]: I0710 00:24:55.157162 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cni-path\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157385 kubelet[3162]: I0710 00:24:55.157177 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-config-path\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157199 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mfz\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-kube-api-access-h7mfz\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157215 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8-kube-proxy\") pod \"kube-proxy-psr8z\" (UID: \"ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8\") " pod="kube-system/kube-proxy-psr8z" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157228 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8-xtables-lock\") pod \"kube-proxy-psr8z\" (UID: \"ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8\") " pod="kube-system/kube-proxy-psr8z" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157242 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-lib-modules\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157269 3162 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-xtables-lock\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.157475 kubelet[3162]: I0710 00:24:55.157286 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de61c3b0-1008-4f61-b862-a166ebc14d6a-clustermesh-secrets\") pod \"cilium-9jrsn\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " pod="kube-system/cilium-9jrsn" Jul 10 00:24:55.332065 systemd[1]: Created slice kubepods-besteffort-pod9d5b6c75_cd14_4efd_9590_8c60684cd143.slice - libcontainer container kubepods-besteffort-pod9d5b6c75_cd14_4efd_9590_8c60684cd143.slice. Jul 10 00:24:55.359293 kubelet[3162]: I0710 00:24:55.359182 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d5b6c75-cd14-4efd-9590-8c60684cd143-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-467dh\" (UID: \"9d5b6c75-cd14-4efd-9590-8c60684cd143\") " pod="kube-system/cilium-operator-6c4d7847fc-467dh" Jul 10 00:24:55.359293 kubelet[3162]: I0710 00:24:55.359239 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552p8\" (UniqueName: \"kubernetes.io/projected/9d5b6c75-cd14-4efd-9590-8c60684cd143-kube-api-access-552p8\") pod \"cilium-operator-6c4d7847fc-467dh\" (UID: \"9d5b6c75-cd14-4efd-9590-8c60684cd143\") " pod="kube-system/cilium-operator-6c4d7847fc-467dh" Jul 10 00:24:55.396780 containerd[1742]: time="2025-07-10T00:24:55.396748770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-psr8z,Uid:ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:55.403291 containerd[1742]: time="2025-07-10T00:24:55.403263417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jrsn,Uid:de61c3b0-1008-4f61-b862-a166ebc14d6a,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:55.442758 containerd[1742]: time="2025-07-10T00:24:55.442666499Z" level=info msg="connecting to shim b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d" address="unix:///run/containerd/s/bb06223347b6932c4529a21fcd51417c96a934240ea1085d46adc3d93ad0318a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:55.464078 containerd[1742]: time="2025-07-10T00:24:55.462805813Z" level=info msg="connecting to shim 5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:55.468860 systemd[1]: Started cri-containerd-b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d.scope - libcontainer container b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d. Jul 10 00:24:55.494839 systemd[1]: Started cri-containerd-5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7.scope - libcontainer container 5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7. 
Jul 10 00:24:55.498566 containerd[1742]: time="2025-07-10T00:24:55.498532423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-psr8z,Uid:ca2eed68-6a1a-471f-9ad2-d5c1f93e6fe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d\"" Jul 10 00:24:55.506829 containerd[1742]: time="2025-07-10T00:24:55.506806183Z" level=info msg="CreateContainer within sandbox \"b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:24:55.522147 containerd[1742]: time="2025-07-10T00:24:55.522115830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jrsn,Uid:de61c3b0-1008-4f61-b862-a166ebc14d6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\"" Jul 10 00:24:55.523981 containerd[1742]: time="2025-07-10T00:24:55.523648048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:24:55.526687 containerd[1742]: time="2025-07-10T00:24:55.526653950Z" level=info msg="Container a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:55.542313 containerd[1742]: time="2025-07-10T00:24:55.542291246Z" level=info msg="CreateContainer within sandbox \"b2018b36940e95e701376d489da8a1f84f372bcc8c0ed3c4829a6472bbcf739d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23\"" Jul 10 00:24:55.542647 containerd[1742]: time="2025-07-10T00:24:55.542593071Z" level=info msg="StartContainer for \"a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23\"" Jul 10 00:24:55.544004 containerd[1742]: time="2025-07-10T00:24:55.543974592Z" level=info msg="connecting to shim a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23" address="unix:///run/containerd/s/bb06223347b6932c4529a21fcd51417c96a934240ea1085d46adc3d93ad0318a" protocol=ttrpc version=3 Jul 10 00:24:55.557823 systemd[1]: Started cri-containerd-a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23.scope - libcontainer container a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23. Jul 10 00:24:55.585057 containerd[1742]: time="2025-07-10T00:24:55.585029530Z" level=info msg="StartContainer for \"a9ade945640df2da6f4aa45f2b8663960d01a370932acff95e42944e8420ab23\" returns successfully" Jul 10 00:24:55.641507 containerd[1742]: time="2025-07-10T00:24:55.641485627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-467dh,Uid:9d5b6c75-cd14-4efd-9590-8c60684cd143,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:55.688356 containerd[1742]: time="2025-07-10T00:24:55.688299037Z" level=info msg="connecting to shim 907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c" address="unix:///run/containerd/s/55dd748a255d36c0632bd896c5e3a9856deef038585e71ac34578ee15055f115" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:55.707809 systemd[1]: Started cri-containerd-907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c.scope - libcontainer container 907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c. 
Jul 10 00:24:55.746011 containerd[1742]: time="2025-07-10T00:24:55.745993014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-467dh,Uid:9d5b6c75-cd14-4efd-9590-8c60684cd143,Namespace:kube-system,Attempt:0,} returns sandbox id \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\"" Jul 10 00:24:55.874677 kubelet[3162]: I0710 00:24:55.874226 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-psr8z" podStartSLOduration=0.874208597 podStartE2EDuration="874.208597ms" podCreationTimestamp="2025-07-10 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:55.873444939 +0000 UTC m=+6.132831258" watchObservedRunningTime="2025-07-10 00:24:55.874208597 +0000 UTC m=+6.133594916" Jul 10 00:24:59.713421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868228804.mount: Deactivated successfully. Jul 10 00:25:01.933979 containerd[1742]: time="2025-07-10T00:25:01.933929545Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:01.936146 containerd[1742]: time="2025-07-10T00:25:01.936108308Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:25:01.938580 containerd[1742]: time="2025-07-10T00:25:01.938543231Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:01.939600 containerd[1742]: time="2025-07-10T00:25:01.939477354Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.415619855s" Jul 10 00:25:01.939600 containerd[1742]: time="2025-07-10T00:25:01.939507605Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:25:01.942037 containerd[1742]: time="2025-07-10T00:25:01.942000068Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:25:01.946911 containerd[1742]: time="2025-07-10T00:25:01.946885578Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:25:01.967493 containerd[1742]: time="2025-07-10T00:25:01.966867501Z" level=info msg="Container 6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:01.982417 containerd[1742]: time="2025-07-10T00:25:01.982394264Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\"" Jul 10 00:25:01.982960 containerd[1742]: time="2025-07-10T00:25:01.982931600Z" level=info msg="StartContainer for \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\"" Jul 10 00:25:01.983673 containerd[1742]: time="2025-07-10T00:25:01.983652263Z" level=info msg="connecting to shim 6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" protocol=ttrpc version=3 Jul 10 00:25:02.006850 systemd[1]: Started cri-containerd-6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d.scope - libcontainer container 6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d. Jul 10 00:25:02.035920 containerd[1742]: time="2025-07-10T00:25:02.035899513Z" level=info msg="StartContainer for \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" returns successfully" Jul 10 00:25:02.044076 systemd[1]: cri-containerd-6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d.scope: Deactivated successfully. Jul 10 00:25:02.047037 containerd[1742]: time="2025-07-10T00:25:02.047015362Z" level=info msg="received exit event container_id:\"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" id:\"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" pid:3581 exited_at:{seconds:1752107102 nanos:46754155}" Jul 10 00:25:02.047224 containerd[1742]: time="2025-07-10T00:25:02.047038628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" id:\"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" pid:3581 exited_at:{seconds:1752107102 nanos:46754155}" Jul 10 00:25:02.061839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d-rootfs.mount: Deactivated successfully. Jul 10 00:25:05.905463 containerd[1742]: time="2025-07-10T00:25:05.905424430Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:25:05.943178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442661073.mount: Deactivated successfully. 
Jul 10 00:25:05.953564 containerd[1742]: time="2025-07-10T00:25:05.952832711Z" level=info msg="Container 5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:05.977549 containerd[1742]: time="2025-07-10T00:25:05.977518811Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\"" Jul 10 00:25:05.978263 containerd[1742]: time="2025-07-10T00:25:05.978242389Z" level=info msg="StartContainer for \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\"" Jul 10 00:25:05.979232 containerd[1742]: time="2025-07-10T00:25:05.979155985Z" level=info msg="connecting to shim 5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" protocol=ttrpc version=3 Jul 10 00:25:06.000853 systemd[1]: Started cri-containerd-5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e.scope - libcontainer container 5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e. Jul 10 00:25:06.030485 containerd[1742]: time="2025-07-10T00:25:06.030422539Z" level=info msg="StartContainer for \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" returns successfully" Jul 10 00:25:06.042360 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:25:06.042610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:25:06.043197 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:06.047001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:06.050010 systemd[1]: cri-containerd-5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e.scope: Deactivated successfully. Jul 10 00:25:06.051158 containerd[1742]: time="2025-07-10T00:25:06.051057066Z" level=info msg="received exit event container_id:\"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" id:\"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" pid:3636 exited_at:{seconds:1752107106 nanos:50766305}" Jul 10 00:25:06.051803 containerd[1742]: time="2025-07-10T00:25:06.051458049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" id:\"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" pid:3636 exited_at:{seconds:1752107106 nanos:50766305}" Jul 10 00:25:06.073086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:25:06.384983 containerd[1742]: time="2025-07-10T00:25:06.384669102Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:06.389043 containerd[1742]: time="2025-07-10T00:25:06.388975764Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:25:06.396495 containerd[1742]: time="2025-07-10T00:25:06.396433428Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:06.397295 containerd[1742]: time="2025-07-10T00:25:06.397214330Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.455011234s" Jul 10 00:25:06.397295 containerd[1742]: time="2025-07-10T00:25:06.397242750Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:25:06.403877 containerd[1742]: time="2025-07-10T00:25:06.403852112Z" level=info msg="CreateContainer within sandbox \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:25:06.415984 containerd[1742]: time="2025-07-10T00:25:06.415958251Z" level=info msg="Container d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:06.427891 containerd[1742]: time="2025-07-10T00:25:06.427866627Z" level=info msg="CreateContainer within sandbox \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\"" Jul 10 00:25:06.429249 containerd[1742]: time="2025-07-10T00:25:06.428285169Z" level=info msg="StartContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\"" Jul 10 00:25:06.429249 containerd[1742]: time="2025-07-10T00:25:06.429011990Z" level=info msg="connecting to shim d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200" address="unix:///run/containerd/s/55dd748a255d36c0632bd896c5e3a9856deef038585e71ac34578ee15055f115" protocol=ttrpc version=3 Jul 10 00:25:06.446852 systemd[1]: Started cri-containerd-d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200.scope - libcontainer container d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200. 
Jul 10 00:25:06.473844 containerd[1742]: time="2025-07-10T00:25:06.473815185Z" level=info msg="StartContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" returns successfully" Jul 10 00:25:06.891272 containerd[1742]: time="2025-07-10T00:25:06.891235721Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:25:06.895720 kubelet[3162]: I0710 00:25:06.895659 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-467dh" podStartSLOduration=1.244625956 podStartE2EDuration="11.895630873s" podCreationTimestamp="2025-07-10 00:24:55 +0000 UTC" firstStartedPulling="2025-07-10 00:24:55.746823444 +0000 UTC m=+6.006209748" lastFinishedPulling="2025-07-10 00:25:06.397828347 +0000 UTC m=+16.657214665" observedRunningTime="2025-07-10 00:25:06.895415902 +0000 UTC m=+17.154802218" watchObservedRunningTime="2025-07-10 00:25:06.895630873 +0000 UTC m=+17.155017191" Jul 10 00:25:06.914144 containerd[1742]: time="2025-07-10T00:25:06.914115179Z" level=info msg="Container da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:06.929518 containerd[1742]: time="2025-07-10T00:25:06.929482600Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\"" Jul 10 00:25:06.929976 containerd[1742]: time="2025-07-10T00:25:06.929954708Z" level=info msg="StartContainer for \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\"" Jul 10 00:25:06.932578 containerd[1742]: time="2025-07-10T00:25:06.932547258Z" level=info msg="connecting to shim da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" protocol=ttrpc version=3 Jul 10 00:25:06.943487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e-rootfs.mount: Deactivated successfully. Jul 10 00:25:06.962934 systemd[1]: Started cri-containerd-da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80.scope - libcontainer container da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80. Jul 10 00:25:07.041809 systemd[1]: cri-containerd-da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80.scope: Deactivated successfully. 
Jul 10 00:25:07.046969 containerd[1742]: time="2025-07-10T00:25:07.046767056Z" level=info msg="StartContainer for \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" returns successfully" Jul 10 00:25:07.047727 containerd[1742]: time="2025-07-10T00:25:07.047655551Z" level=info msg="received exit event container_id:\"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" id:\"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" pid:3723 exited_at:{seconds:1752107107 nanos:46893956}" Jul 10 00:25:07.047910 containerd[1742]: time="2025-07-10T00:25:07.047837971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" id:\"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" pid:3723 exited_at:{seconds:1752107107 nanos:46893956}" Jul 10 00:25:07.080840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80-rootfs.mount: Deactivated successfully. Jul 10 00:25:07.895405 containerd[1742]: time="2025-07-10T00:25:07.895364643Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:25:07.913919 containerd[1742]: time="2025-07-10T00:25:07.912197782Z" level=info msg="Container e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:07.928286 containerd[1742]: time="2025-07-10T00:25:07.928254340Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\"" Jul 10 00:25:07.929448 containerd[1742]: time="2025-07-10T00:25:07.929421627Z" level=info msg="StartContainer for \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\"" Jul 10 00:25:07.930515 containerd[1742]: time="2025-07-10T00:25:07.930412513Z" level=info msg="connecting to shim e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" protocol=ttrpc version=3 Jul 10 00:25:07.959838 systemd[1]: Started cri-containerd-e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf.scope - libcontainer container e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf. Jul 10 00:25:07.978644 systemd[1]: cri-containerd-e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf.scope: Deactivated successfully. 
Jul 10 00:25:07.979598 containerd[1742]: time="2025-07-10T00:25:07.979573574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" id:\"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" pid:3765 exited_at:{seconds:1752107107 nanos:978955322}" Jul 10 00:25:07.982709 containerd[1742]: time="2025-07-10T00:25:07.982605772Z" level=info msg="received exit event container_id:\"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" id:\"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" pid:3765 exited_at:{seconds:1752107107 nanos:978955322}" Jul 10 00:25:07.987952 containerd[1742]: time="2025-07-10T00:25:07.987931493Z" level=info msg="StartContainer for \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" returns successfully" Jul 10 00:25:07.996687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf-rootfs.mount: Deactivated successfully. Jul 10 00:25:08.902089 containerd[1742]: time="2025-07-10T00:25:08.902055276Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:25:08.931759 containerd[1742]: time="2025-07-10T00:25:08.926236525Z" level=info msg="Container 609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:08.948240 containerd[1742]: time="2025-07-10T00:25:08.948205208Z" level=info msg="CreateContainer within sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\"" Jul 10 00:25:08.950718 containerd[1742]: time="2025-07-10T00:25:08.950614364Z" level=info msg="StartContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\"" Jul 10 00:25:08.951564 containerd[1742]: time="2025-07-10T00:25:08.951511751Z" level=info msg="connecting to shim 609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec" address="unix:///run/containerd/s/2a619d4f5565d421e68f605cdcf812a249e211900b0fee3f08d42395c0bc0ab8" protocol=ttrpc version=3 Jul 10 00:25:08.982833 systemd[1]: Started cri-containerd-609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec.scope - libcontainer container 609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec. Jul 10 00:25:09.021178 containerd[1742]: time="2025-07-10T00:25:09.021156284Z" level=info msg="StartContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" returns successfully" Jul 10 00:25:09.089688 containerd[1742]: time="2025-07-10T00:25:09.089656224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" id:\"c4328cf3197566ccdcf6d9bad7d23226698a29997a790a244c1ff19dcb028c86\" pid:3834 exited_at:{seconds:1752107109 nanos:89430138}" Jul 10 00:25:09.146806 kubelet[3162]: I0710 00:25:09.146774 3162 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:25:09.194766 systemd[1]: Created slice kubepods-burstable-pod7c222345_25a8_4d69_b7a8_0bfb78c1d90b.slice - libcontainer container kubepods-burstable-pod7c222345_25a8_4d69_b7a8_0bfb78c1d90b.slice. 
Jul 10 00:25:09.206744 systemd[1]: Created slice kubepods-burstable-pode62c5a9b_fa8f_42f5_ad7b_994dede3874c.slice - libcontainer container kubepods-burstable-pode62c5a9b_fa8f_42f5_ad7b_994dede3874c.slice. Jul 10 00:25:09.249157 kubelet[3162]: I0710 00:25:09.249117 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbqz\" (UniqueName: \"kubernetes.io/projected/e62c5a9b-fa8f-42f5-ad7b-994dede3874c-kube-api-access-zzbqz\") pod \"coredns-674b8bbfcf-p5jvv\" (UID: \"e62c5a9b-fa8f-42f5-ad7b-994dede3874c\") " pod="kube-system/coredns-674b8bbfcf-p5jvv" Jul 10 00:25:09.249421 kubelet[3162]: I0710 00:25:09.249268 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e62c5a9b-fa8f-42f5-ad7b-994dede3874c-config-volume\") pod \"coredns-674b8bbfcf-p5jvv\" (UID: \"e62c5a9b-fa8f-42f5-ad7b-994dede3874c\") " pod="kube-system/coredns-674b8bbfcf-p5jvv" Jul 10 00:25:09.249421 kubelet[3162]: I0710 00:25:09.249291 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-266m9\" (UniqueName: \"kubernetes.io/projected/7c222345-25a8-4d69-b7a8-0bfb78c1d90b-kube-api-access-266m9\") pod \"coredns-674b8bbfcf-sqm6r\" (UID: \"7c222345-25a8-4d69-b7a8-0bfb78c1d90b\") " pod="kube-system/coredns-674b8bbfcf-sqm6r" Jul 10 00:25:09.249421 kubelet[3162]: I0710 00:25:09.249318 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c222345-25a8-4d69-b7a8-0bfb78c1d90b-config-volume\") pod \"coredns-674b8bbfcf-sqm6r\" (UID: \"7c222345-25a8-4d69-b7a8-0bfb78c1d90b\") " pod="kube-system/coredns-674b8bbfcf-sqm6r" Jul 10 00:25:09.502908 containerd[1742]: time="2025-07-10T00:25:09.502589909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqm6r,Uid:7c222345-25a8-4d69-b7a8-0bfb78c1d90b,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:09.511145 containerd[1742]: time="2025-07-10T00:25:09.511113815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5jvv,Uid:e62c5a9b-fa8f-42f5-ad7b-994dede3874c,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:09.922131 kubelet[3162]: I0710 00:25:09.921978 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9jrsn" podStartSLOduration=8.504649443 podStartE2EDuration="14.921960465s" podCreationTimestamp="2025-07-10 00:24:55 +0000 UTC" firstStartedPulling="2025-07-10 00:24:55.523007749 +0000 UTC m=+5.782394062" lastFinishedPulling="2025-07-10 00:25:01.940318769 +0000 UTC m=+12.199705084" observedRunningTime="2025-07-10 00:25:09.918880487 +0000 UTC m=+20.178266811" watchObservedRunningTime="2025-07-10 00:25:09.921960465 +0000 UTC m=+20.181346786" Jul 10 00:25:11.011411 systemd-networkd[1362]: cilium_host: Link UP Jul 10 00:25:11.011512 systemd-networkd[1362]: cilium_net: Link UP Jul 10 00:25:11.011616 systemd-networkd[1362]: cilium_net: Gained carrier Jul 10 00:25:11.012853 systemd-networkd[1362]: cilium_host: Gained carrier Jul 10 00:25:11.017828 systemd-networkd[1362]: cilium_host: Gained IPv6LL Jul 10 00:25:11.087775 systemd-networkd[1362]: cilium_net: Gained IPv6LL Jul 10 00:25:11.138914 systemd-networkd[1362]: cilium_vxlan: Link UP Jul 10 00:25:11.138919 systemd-networkd[1362]: cilium_vxlan: Gained carrier Jul 10 00:25:11.309733 kernel: NET: Registered PF_ALG protocol family Jul 
10 00:25:11.804074 systemd-networkd[1362]: lxc_health: Link UP Jul 10 00:25:11.812179 systemd-networkd[1362]: lxc_health: Gained carrier Jul 10 00:25:12.040518 systemd-networkd[1362]: lxcb221f7ef6eb8: Link UP Jul 10 00:25:12.048769 kernel: eth0: renamed from tmpa6d73 Jul 10 00:25:12.052166 systemd-networkd[1362]: lxcb221f7ef6eb8: Gained carrier Jul 10 00:25:12.062837 kernel: eth0: renamed from tmp2361b Jul 10 00:25:12.065936 systemd-networkd[1362]: lxc72750ec4a469: Link UP Jul 10 00:25:12.068790 systemd-networkd[1362]: lxc72750ec4a469: Gained carrier Jul 10 00:25:12.199829 systemd-networkd[1362]: cilium_vxlan: Gained IPv6LL Jul 10 00:25:12.903961 systemd-networkd[1362]: lxc_health: Gained IPv6LL Jul 10 00:25:13.735955 systemd-networkd[1362]: lxcb221f7ef6eb8: Gained IPv6LL Jul 10 00:25:13.927839 systemd-networkd[1362]: lxc72750ec4a469: Gained IPv6LL Jul 10 00:25:14.939003 containerd[1742]: time="2025-07-10T00:25:14.938962470Z" level=info msg="connecting to shim a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea" address="unix:///run/containerd/s/1bead9621c9956d9187da22f2dbf92f6fd5551157da661bf7bc61910eba87129" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:14.944864 containerd[1742]: time="2025-07-10T00:25:14.944051209Z" level=info msg="connecting to shim 2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82" address="unix:///run/containerd/s/459bec252c7d5fee29ed7f8994bc56308f465d7c6e6a04a488e7971e631c127f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:14.978835 systemd[1]: Started cri-containerd-a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea.scope - libcontainer container a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea. Jul 10 00:25:14.981839 systemd[1]: Started cri-containerd-2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82.scope - libcontainer container 2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82. 
Jul 10 00:25:15.029441 containerd[1742]: time="2025-07-10T00:25:15.029410781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sqm6r,Uid:7c222345-25a8-4d69-b7a8-0bfb78c1d90b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea\"" Jul 10 00:25:15.037865 containerd[1742]: time="2025-07-10T00:25:15.037834961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5jvv,Uid:e62c5a9b-fa8f-42f5-ad7b-994dede3874c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82\"" Jul 10 00:25:15.039615 containerd[1742]: time="2025-07-10T00:25:15.039596649Z" level=info msg="CreateContainer within sandbox \"a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:25:15.045695 containerd[1742]: time="2025-07-10T00:25:15.045121792Z" level=info msg="CreateContainer within sandbox \"2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:25:15.062006 containerd[1742]: time="2025-07-10T00:25:15.061983238Z" level=info msg="Container 50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:15.064293 containerd[1742]: time="2025-07-10T00:25:15.064268858Z" level=info msg="Container 6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:15.079791 containerd[1742]: time="2025-07-10T00:25:15.079769085Z" level=info msg="CreateContainer within sandbox \"a6d734359295891a03edd02c0b8dbbddf367a4b701975580166396c2c5b466ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f\"" Jul 10 00:25:15.080106 containerd[1742]: time="2025-07-10T00:25:15.080090734Z" level=info msg="StartContainer for \"50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f\"" Jul 10 00:25:15.080890 containerd[1742]: time="2025-07-10T00:25:15.080839034Z" level=info msg="connecting to shim 50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f" address="unix:///run/containerd/s/1bead9621c9956d9187da22f2dbf92f6fd5551157da661bf7bc61910eba87129" protocol=ttrpc version=3 Jul 10 00:25:15.085315 containerd[1742]: time="2025-07-10T00:25:15.085286214Z" level=info msg="CreateContainer within sandbox \"2361b38c6aea3f2bf570a8c5f022f45fc4a516fea97ddb7e9d3e467b81b7af82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3\"" Jul 10 00:25:15.085719 containerd[1742]: time="2025-07-10T00:25:15.085664552Z" level=info msg="StartContainer for \"6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3\"" Jul 10 00:25:15.086349 containerd[1742]: time="2025-07-10T00:25:15.086316492Z" level=info msg="connecting to shim 6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3" address="unix:///run/containerd/s/459bec252c7d5fee29ed7f8994bc56308f465d7c6e6a04a488e7971e631c127f" protocol=ttrpc version=3 Jul 10 00:25:15.102872 systemd[1]: Started cri-containerd-50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f.scope - libcontainer container 50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f. 
Jul 10 00:25:15.106875 systemd[1]: Started cri-containerd-6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3.scope - libcontainer container 6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3. Jul 10 00:25:15.139423 containerd[1742]: time="2025-07-10T00:25:15.139008918Z" level=info msg="StartContainer for \"50acbf4473adf24d40db4142621f68f39f24829e140493c910e1c86ea9137b6f\" returns successfully" Jul 10 00:25:15.148343 containerd[1742]: time="2025-07-10T00:25:15.148006373Z" level=info msg="StartContainer for \"6aadef29b7fdc7391efe9a578291b2e02a9b89ab722e39fb0611d649337e8ad3\" returns successfully" Jul 10 00:25:15.954099 kubelet[3162]: I0710 00:25:15.953899 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sqm6r" podStartSLOduration=20.953823852 podStartE2EDuration="20.953823852s" podCreationTimestamp="2025-07-10 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:15.953395735 +0000 UTC m=+26.212782054" watchObservedRunningTime="2025-07-10 00:25:15.953823852 +0000 UTC m=+26.213210171" Jul 10 00:25:15.955848 kubelet[3162]: I0710 00:25:15.955688 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p5jvv" podStartSLOduration=20.955673592 podStartE2EDuration="20.955673592s" podCreationTimestamp="2025-07-10 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:15.935795152 +0000 UTC m=+26.195181476" watchObservedRunningTime="2025-07-10 00:25:15.955673592 +0000 UTC m=+26.215059915" Jul 10 00:26:28.012884 systemd[1]: Started sshd@7-10.200.8.5:22-10.200.16.10:54112.service - OpenSSH per-connection server daemon (10.200.16.10:54112). Jul 10 00:26:28.638196 sshd[4488]: Accepted publickey for core from 10.200.16.10 port 54112 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:28.639342 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:28.643385 systemd-logind[1709]: New session 10 of user core. Jul 10 00:26:28.647885 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:26:29.229249 sshd[4490]: Connection closed by 10.200.16.10 port 54112 Jul 10 00:26:29.233492 sshd-session[4488]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:29.236290 systemd[1]: sshd@7-10.200.8.5:22-10.200.16.10:54112.service: Deactivated successfully. Jul 10 00:26:29.237988 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:26:29.238782 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:26:29.239963 systemd-logind[1709]: Removed session 10. Jul 10 00:26:34.344347 systemd[1]: Started sshd@8-10.200.8.5:22-10.200.16.10:41182.service - OpenSSH per-connection server daemon (10.200.16.10:41182). Jul 10 00:26:34.971672 sshd[4517]: Accepted publickey for core from 10.200.16.10 port 41182 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:34.972899 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:34.976743 systemd-logind[1709]: New session 11 of user core. Jul 10 00:26:34.982840 systemd[1]: Started session-11.scope - Session 11 of User core. 
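The pod_startup_latency_tracker entries above encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling); for the two coredns pods the pull timestamps are zero values, so the two durations coincide. A minimal sketch (not kubelet code) that reproduces the cilium-9jrsn figures from the offsets shown in the log:

# Offsets in seconds, copied from the pod_startup_latency_tracker entry for cilium-9jrsn;
# the m=+N.NNN values are kubelet's monotonic-clock offsets since process start.
pod_start_e2e = 14.921960465          # podStartE2EDuration
first_started_pulling = 5.782394062   # firstStartedPulling, m=+ offset
last_finished_pulling = 12.199705084  # lastFinishedPulling, m=+ offset

pull_window = last_finished_pulling - first_started_pulling
pod_start_slo = pod_start_e2e - pull_window
print(f"image pull window:   {pull_window:.9f}s")    # ~6.417311022 s
print(f"podStartSLOduration: {pod_start_slo:.9f}s")  # ~8.504649443 s, matching the log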
Jul 10 00:26:35.461010 sshd[4519]: Connection closed by 10.200.16.10 port 41182 Jul 10 00:26:35.461735 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:35.464914 systemd[1]: sshd@8-10.200.8.5:22-10.200.16.10:41182.service: Deactivated successfully. Jul 10 00:26:35.466658 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:26:35.467398 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:26:35.468596 systemd-logind[1709]: Removed session 11. Jul 10 00:26:40.575401 systemd[1]: Started sshd@9-10.200.8.5:22-10.200.16.10:43400.service - OpenSSH per-connection server daemon (10.200.16.10:43400). Jul 10 00:26:41.199488 sshd[4532]: Accepted publickey for core from 10.200.16.10 port 43400 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:41.200847 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:41.205766 systemd-logind[1709]: New session 12 of user core. Jul 10 00:26:41.209880 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:26:41.689647 sshd[4534]: Connection closed by 10.200.16.10 port 43400 Jul 10 00:26:41.690188 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:41.693362 systemd[1]: sshd@9-10.200.8.5:22-10.200.16.10:43400.service: Deactivated successfully. Jul 10 00:26:41.695085 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:26:41.695917 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:26:41.697122 systemd-logind[1709]: Removed session 12. Jul 10 00:26:46.812555 systemd[1]: Started sshd@10-10.200.8.5:22-10.200.16.10:43416.service - OpenSSH per-connection server daemon (10.200.16.10:43416). Jul 10 00:26:47.444386 sshd[4547]: Accepted publickey for core from 10.200.16.10 port 43416 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:47.445519 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:47.449782 systemd-logind[1709]: New session 13 of user core. Jul 10 00:26:47.452881 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:26:47.936987 sshd[4549]: Connection closed by 10.200.16.10 port 43416 Jul 10 00:26:47.937457 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:47.939915 systemd[1]: sshd@10-10.200.8.5:22-10.200.16.10:43416.service: Deactivated successfully. Jul 10 00:26:47.941513 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:26:47.943223 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:26:47.944131 systemd-logind[1709]: Removed session 13. Jul 10 00:26:48.055380 systemd[1]: Started sshd@11-10.200.8.5:22-10.200.16.10:43424.service - OpenSSH per-connection server daemon (10.200.16.10:43424). Jul 10 00:26:48.682073 sshd[4562]: Accepted publickey for core from 10.200.16.10 port 43424 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:48.683311 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:48.687834 systemd-logind[1709]: New session 14 of user core. Jul 10 00:26:48.690904 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 10 00:26:49.209854 sshd[4564]: Connection closed by 10.200.16.10 port 43424 Jul 10 00:26:49.210392 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:49.213861 systemd[1]: sshd@11-10.200.8.5:22-10.200.16.10:43424.service: Deactivated successfully. Jul 10 00:26:49.215577 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:26:49.216381 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:26:49.217559 systemd-logind[1709]: Removed session 14. Jul 10 00:26:49.323474 systemd[1]: Started sshd@12-10.200.8.5:22-10.200.16.10:43430.service - OpenSSH per-connection server daemon (10.200.16.10:43430). Jul 10 00:26:49.950290 sshd[4574]: Accepted publickey for core from 10.200.16.10 port 43430 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:49.951389 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:49.955563 systemd-logind[1709]: New session 15 of user core. Jul 10 00:26:49.962864 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:26:50.441985 sshd[4578]: Connection closed by 10.200.16.10 port 43430 Jul 10 00:26:50.442488 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:50.445586 systemd[1]: sshd@12-10.200.8.5:22-10.200.16.10:43430.service: Deactivated successfully. Jul 10 00:26:50.447306 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:26:50.448265 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:26:50.449523 systemd-logind[1709]: Removed session 15. Jul 10 00:26:55.562945 systemd[1]: Started sshd@13-10.200.8.5:22-10.200.16.10:57090.service - OpenSSH per-connection server daemon (10.200.16.10:57090). Jul 10 00:26:56.194100 sshd[4590]: Accepted publickey for core from 10.200.16.10 port 57090 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:56.195269 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:56.199642 systemd-logind[1709]: New session 16 of user core. Jul 10 00:26:56.203867 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:26:56.679843 sshd[4594]: Connection closed by 10.200.16.10 port 57090 Jul 10 00:26:56.680460 sshd-session[4590]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:56.683079 systemd[1]: sshd@13-10.200.8.5:22-10.200.16.10:57090.service: Deactivated successfully. Jul 10 00:26:56.684759 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:26:56.685942 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:26:56.687146 systemd-logind[1709]: Removed session 16. Jul 10 00:26:56.791319 systemd[1]: Started sshd@14-10.200.8.5:22-10.200.16.10:57106.service - OpenSSH per-connection server daemon (10.200.16.10:57106). Jul 10 00:26:57.425116 sshd[4606]: Accepted publickey for core from 10.200.16.10 port 57106 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:57.426288 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:57.430772 systemd-logind[1709]: New session 17 of user core. Jul 10 00:26:57.440847 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 10 00:26:57.967500 sshd[4608]: Connection closed by 10.200.16.10 port 57106 Jul 10 00:26:57.967964 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:57.971086 systemd[1]: sshd@14-10.200.8.5:22-10.200.16.10:57106.service: Deactivated successfully. Jul 10 00:26:57.972902 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:26:57.973689 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:26:57.975087 systemd-logind[1709]: Removed session 17. Jul 10 00:26:58.080459 systemd[1]: Started sshd@15-10.200.8.5:22-10.200.16.10:57120.service - OpenSSH per-connection server daemon (10.200.16.10:57120). Jul 10 00:26:58.706774 sshd[4618]: Accepted publickey for core from 10.200.16.10 port 57120 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:58.708029 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:58.712410 systemd-logind[1709]: New session 18 of user core. Jul 10 00:26:58.716869 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:26:59.952761 sshd[4620]: Connection closed by 10.200.16.10 port 57120 Jul 10 00:26:59.953294 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:59.956460 systemd[1]: sshd@15-10.200.8.5:22-10.200.16.10:57120.service: Deactivated successfully. Jul 10 00:26:59.958177 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:26:59.959095 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:26:59.960288 systemd-logind[1709]: Removed session 18. Jul 10 00:27:00.066734 systemd[1]: Started sshd@16-10.200.8.5:22-10.200.16.10:57186.service - OpenSSH per-connection server daemon (10.200.16.10:57186). Jul 10 00:27:00.693367 sshd[4638]: Accepted publickey for core from 10.200.16.10 port 57186 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:00.694582 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:00.698846 systemd-logind[1709]: New session 19 of user core. Jul 10 00:27:00.703861 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:27:01.262658 sshd[4640]: Connection closed by 10.200.16.10 port 57186 Jul 10 00:27:01.263219 sshd-session[4638]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:01.265513 systemd[1]: sshd@16-10.200.8.5:22-10.200.16.10:57186.service: Deactivated successfully. Jul 10 00:27:01.267126 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:27:01.268503 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:27:01.270012 systemd-logind[1709]: Removed session 19. Jul 10 00:27:01.378745 systemd[1]: Started sshd@17-10.200.8.5:22-10.200.16.10:57196.service - OpenSSH per-connection server daemon (10.200.16.10:57196). Jul 10 00:27:02.008797 sshd[4649]: Accepted publickey for core from 10.200.16.10 port 57196 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:02.010114 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:02.014676 systemd-logind[1709]: New session 20 of user core. Jul 10 00:27:02.017851 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 00:27:02.495430 sshd[4651]: Connection closed by 10.200.16.10 port 57196 Jul 10 00:27:02.496000 sshd-session[4649]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:02.499065 systemd[1]: sshd@17-10.200.8.5:22-10.200.16.10:57196.service: Deactivated successfully. Jul 10 00:27:02.500807 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:27:02.501481 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:27:02.502960 systemd-logind[1709]: Removed session 20. Jul 10 00:27:07.613717 systemd[1]: Started sshd@18-10.200.8.5:22-10.200.16.10:57202.service - OpenSSH per-connection server daemon (10.200.16.10:57202). Jul 10 00:27:08.239365 sshd[4665]: Accepted publickey for core from 10.200.16.10 port 57202 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:08.240856 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:08.245664 systemd-logind[1709]: New session 21 of user core. Jul 10 00:27:08.253821 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:27:08.723232 sshd[4667]: Connection closed by 10.200.16.10 port 57202 Jul 10 00:27:08.723754 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:08.727320 systemd[1]: sshd@18-10.200.8.5:22-10.200.16.10:57202.service: Deactivated successfully. Jul 10 00:27:08.729827 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:27:08.730607 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:27:08.731933 systemd-logind[1709]: Removed session 21. Jul 10 00:27:13.839644 systemd[1]: Started sshd@19-10.200.8.5:22-10.200.16.10:43192.service - OpenSSH per-connection server daemon (10.200.16.10:43192). Jul 10 00:27:14.466141 sshd[4679]: Accepted publickey for core from 10.200.16.10 port 43192 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:14.467278 sshd-session[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:14.471761 systemd-logind[1709]: New session 22 of user core. Jul 10 00:27:14.475861 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:27:14.967969 sshd[4681]: Connection closed by 10.200.16.10 port 43192 Jul 10 00:27:14.968518 sshd-session[4679]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:14.971622 systemd[1]: sshd@19-10.200.8.5:22-10.200.16.10:43192.service: Deactivated successfully. Jul 10 00:27:14.973387 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:27:14.974258 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:27:14.975490 systemd-logind[1709]: Removed session 22. Jul 10 00:27:15.080958 systemd[1]: Started sshd@20-10.200.8.5:22-10.200.16.10:43208.service - OpenSSH per-connection server daemon (10.200.16.10:43208). Jul 10 00:27:15.714741 sshd[4693]: Accepted publickey for core from 10.200.16.10 port 43208 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:15.715886 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:15.719773 systemd-logind[1709]: New session 23 of user core. Jul 10 00:27:15.723900 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 10 00:27:17.339387 containerd[1742]: time="2025-07-10T00:27:17.339194805Z" level=info msg="StopContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" with timeout 30 (s)" Jul 10 00:27:17.340360 containerd[1742]: time="2025-07-10T00:27:17.339797425Z" level=info msg="Stop container \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" with signal terminated" Jul 10 00:27:17.352423 systemd[1]: cri-containerd-d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200.scope: Deactivated successfully. Jul 10 00:27:17.355301 containerd[1742]: time="2025-07-10T00:27:17.355212385Z" level=info msg="received exit event container_id:\"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" id:\"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" pid:3691 exited_at:{seconds:1752107237 nanos:354816552}" Jul 10 00:27:17.355483 containerd[1742]: time="2025-07-10T00:27:17.355409951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" id:\"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" pid:3691 exited_at:{seconds:1752107237 nanos:354816552}" Jul 10 00:27:17.366876 containerd[1742]: time="2025-07-10T00:27:17.366846383Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:27:17.372477 containerd[1742]: time="2025-07-10T00:27:17.372444013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" id:\"9521f7816fb51a4f75ecf256ce5d32e428c876fbca264c34ea39267b00c04e89\" pid:4724 exited_at:{seconds:1752107237 nanos:372262189}" Jul 10 00:27:17.374510 containerd[1742]: time="2025-07-10T00:27:17.374490539Z" level=info msg="StopContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" with timeout 2 (s)" Jul 10 00:27:17.375004 containerd[1742]: time="2025-07-10T00:27:17.374922698Z" level=info msg="Stop container \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" with signal terminated" Jul 10 00:27:17.379356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200-rootfs.mount: Deactivated successfully. Jul 10 00:27:17.384824 systemd-networkd[1362]: lxc_health: Link DOWN Jul 10 00:27:17.384829 systemd-networkd[1362]: lxc_health: Lost carrier Jul 10 00:27:17.396094 systemd[1]: cri-containerd-609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec.scope: Deactivated successfully. Jul 10 00:27:17.396373 systemd[1]: cri-containerd-609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec.scope: Consumed 5.167s CPU time, 125.1M memory peak, 152K read from disk, 13.3M written to disk. 
Jul 10 00:27:17.396978 containerd[1742]: time="2025-07-10T00:27:17.396957875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" pid:3805 exited_at:{seconds:1752107237 nanos:396796063}" Jul 10 00:27:17.397232 containerd[1742]: time="2025-07-10T00:27:17.397146203Z" level=info msg="received exit event container_id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" id:\"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" pid:3805 exited_at:{seconds:1752107237 nanos:396796063}" Jul 10 00:27:17.412320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec-rootfs.mount: Deactivated successfully. Jul 10 00:27:17.500022 containerd[1742]: time="2025-07-10T00:27:17.499994104Z" level=info msg="StopContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" returns successfully" Jul 10 00:27:17.500773 containerd[1742]: time="2025-07-10T00:27:17.500751060Z" level=info msg="StopPodSandbox for \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\"" Jul 10 00:27:17.500851 containerd[1742]: time="2025-07-10T00:27:17.500814716Z" level=info msg="Container to stop \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.500851 containerd[1742]: time="2025-07-10T00:27:17.500828061Z" level=info msg="Container to stop \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.500851 containerd[1742]: time="2025-07-10T00:27:17.500837505Z" level=info msg="Container to stop \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.500851 containerd[1742]: time="2025-07-10T00:27:17.500847241Z" level=info msg="Container to stop \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.500953 containerd[1742]: time="2025-07-10T00:27:17.500857239Z" level=info msg="Container to stop \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.504557 containerd[1742]: time="2025-07-10T00:27:17.504494369Z" level=info msg="StopContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" returns successfully" Jul 10 00:27:17.505275 containerd[1742]: time="2025-07-10T00:27:17.505021456Z" level=info msg="StopPodSandbox for \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\"" Jul 10 00:27:17.505275 containerd[1742]: time="2025-07-10T00:27:17.505075671Z" level=info msg="Container to stop \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:17.508032 systemd[1]: cri-containerd-5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7.scope: Deactivated successfully. Jul 10 00:27:17.513160 systemd[1]: cri-containerd-907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c.scope: Deactivated successfully. 
Jul 10 00:27:17.514244 containerd[1742]: time="2025-07-10T00:27:17.514221093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" id:\"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" pid:3322 exit_status:137 exited_at:{seconds:1752107237 nanos:513719163}" Jul 10 00:27:17.537118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c-rootfs.mount: Deactivated successfully. Jul 10 00:27:17.542125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7-rootfs.mount: Deactivated successfully. Jul 10 00:27:17.552124 containerd[1742]: time="2025-07-10T00:27:17.552021289Z" level=info msg="shim disconnected" id=907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c namespace=k8s.io Jul 10 00:27:17.552359 containerd[1742]: time="2025-07-10T00:27:17.552293176Z" level=warning msg="cleaning up after shim disconnected" id=907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c namespace=k8s.io Jul 10 00:27:17.552847 containerd[1742]: time="2025-07-10T00:27:17.552311166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:17.552961 containerd[1742]: time="2025-07-10T00:27:17.552940312Z" level=info msg="shim disconnected" id=5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7 namespace=k8s.io Jul 10 00:27:17.552994 containerd[1742]: time="2025-07-10T00:27:17.552965305Z" level=warning msg="cleaning up after shim disconnected" id=5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7 namespace=k8s.io Jul 10 00:27:17.552994 containerd[1742]: time="2025-07-10T00:27:17.552975969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:17.564304 containerd[1742]: time="2025-07-10T00:27:17.564275643Z" level=info msg="received exit event sandbox_id:\"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" exit_status:137 exited_at:{seconds:1752107237 nanos:513719163}" Jul 10 00:27:17.564459 containerd[1742]: time="2025-07-10T00:27:17.564408586Z" level=info msg="received exit event sandbox_id:\"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" exit_status:137 exited_at:{seconds:1752107237 nanos:515041905}" Jul 10 00:27:17.566513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c-shm.mount: Deactivated successfully. 
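The exit_status:137 reported for both pod sandboxes in the exit events above follows the usual 128-plus-signal convention for processes that die by signal, i.e. the sandbox (pause) processes were killed with SIGKILL when their sandboxes were stopped. A tiny check of that decoding (this assumes the 128+N convention and is not taken from the containerd source):

import signal

exit_status = 137  # as reported in the TaskExit / exit events above
if exit_status > 128:
    sig = signal.Signals(exit_status - 128)
    print(f"terminated by {sig.name} ({sig.value})")  # terminated by SIGKILL (9)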
Jul 10 00:27:17.566811 containerd[1742]: time="2025-07-10T00:27:17.566549304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" id:\"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" pid:3437 exit_status:137 exited_at:{seconds:1752107237 nanos:515041905}" Jul 10 00:27:17.566989 containerd[1742]: time="2025-07-10T00:27:17.566971672Z" level=info msg="TearDown network for sandbox \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" successfully" Jul 10 00:27:17.567113 containerd[1742]: time="2025-07-10T00:27:17.567022611Z" level=info msg="StopPodSandbox for \"5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7\" returns successfully" Jul 10 00:27:17.567356 containerd[1742]: time="2025-07-10T00:27:17.567342381Z" level=info msg="TearDown network for sandbox \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" successfully" Jul 10 00:27:17.567462 containerd[1742]: time="2025-07-10T00:27:17.567393321Z" level=info msg="StopPodSandbox for \"907fc45103b9b74715816c3a82343e8051f1692ff459cafc10c7b58a4d91a57c\" returns successfully" Jul 10 00:27:17.654337 kubelet[3162]: I0710 00:27:17.654306 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-cgroup\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.654337 kubelet[3162]: I0710 00:27:17.654342 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cni-path\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654358 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-xtables-lock\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654380 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-config-path\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654399 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7mfz\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-kube-api-access-h7mfz\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654416 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-552p8\" (UniqueName: \"kubernetes.io/projected/9d5b6c75-cd14-4efd-9590-8c60684cd143-kube-api-access-552p8\") pod \"9d5b6c75-cd14-4efd-9590-8c60684cd143\" (UID: \"9d5b6c75-cd14-4efd-9590-8c60684cd143\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654435 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-run\") 
pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655766 kubelet[3162]: I0710 00:27:17.654453 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-bpf-maps\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654472 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d5b6c75-cd14-4efd-9590-8c60684cd143-cilium-config-path\") pod \"9d5b6c75-cd14-4efd-9590-8c60684cd143\" (UID: \"9d5b6c75-cd14-4efd-9590-8c60684cd143\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654488 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-etc-cni-netd\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654504 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-kernel\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654522 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-hubble-tls\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654536 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-lib-modules\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.655986 kubelet[3162]: I0710 00:27:17.654555 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de61c3b0-1008-4f61-b862-a166ebc14d6a-clustermesh-secrets\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.656140 kubelet[3162]: I0710 00:27:17.654570 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-net\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.656140 kubelet[3162]: I0710 00:27:17.654585 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-hostproc\") pod \"de61c3b0-1008-4f61-b862-a166ebc14d6a\" (UID: \"de61c3b0-1008-4f61-b862-a166ebc14d6a\") " Jul 10 00:27:17.656140 kubelet[3162]: I0710 00:27:17.654645 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-hostproc" (OuterVolumeSpecName: "hostproc") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: 
"de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.656140 kubelet[3162]: I0710 00:27:17.654675 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.656140 kubelet[3162]: I0710 00:27:17.654689 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cni-path" (OuterVolumeSpecName: "cni-path") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.656255 kubelet[3162]: I0710 00:27:17.654722 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.656255 kubelet[3162]: I0710 00:27:17.654755 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.656773 kubelet[3162]: I0710 00:27:17.656743 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:27:17.656826 kubelet[3162]: I0710 00:27:17.656799 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.657861 kubelet[3162]: I0710 00:27:17.657834 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-kube-api-access-h7mfz" (OuterVolumeSpecName: "kube-api-access-h7mfz") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "kube-api-access-h7mfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:17.659530 kubelet[3162]: I0710 00:27:17.658854 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:17.659530 kubelet[3162]: I0710 00:27:17.658895 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.660109 kubelet[3162]: I0710 00:27:17.660086 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5b6c75-cd14-4efd-9590-8c60684cd143-kube-api-access-552p8" (OuterVolumeSpecName: "kube-api-access-552p8") pod "9d5b6c75-cd14-4efd-9590-8c60684cd143" (UID: "9d5b6c75-cd14-4efd-9590-8c60684cd143"). InnerVolumeSpecName "kube-api-access-552p8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:17.660201 kubelet[3162]: I0710 00:27:17.660191 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.660254 kubelet[3162]: I0710 00:27:17.660246 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.661018 kubelet[3162]: I0710 00:27:17.660996 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de61c3b0-1008-4f61-b862-a166ebc14d6a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:27:17.661076 kubelet[3162]: I0710 00:27:17.661035 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de61c3b0-1008-4f61-b862-a166ebc14d6a" (UID: "de61c3b0-1008-4f61-b862-a166ebc14d6a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:17.661938 kubelet[3162]: I0710 00:27:17.661920 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5b6c75-cd14-4efd-9590-8c60684cd143-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d5b6c75-cd14-4efd-9590-8c60684cd143" (UID: "9d5b6c75-cd14-4efd-9590-8c60684cd143"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:27:17.755418 kubelet[3162]: I0710 00:27:17.755390 3162 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de61c3b0-1008-4f61-b862-a166ebc14d6a-clustermesh-secrets\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755418 kubelet[3162]: I0710 00:27:17.755415 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-net\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755427 3162 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-hostproc\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755436 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-cgroup\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755448 3162 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cni-path\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755456 3162 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-xtables-lock\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755464 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-config-path\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755472 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7mfz\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-kube-api-access-h7mfz\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755482 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-552p8\" (UniqueName: \"kubernetes.io/projected/9d5b6c75-cd14-4efd-9590-8c60684cd143-kube-api-access-552p8\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755531 kubelet[3162]: I0710 00:27:17.755491 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-cilium-run\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755499 3162 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-bpf-maps\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755507 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d5b6c75-cd14-4efd-9590-8c60684cd143-cilium-config-path\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755517 
3162 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-etc-cni-netd\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755528 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755538 3162 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de61c3b0-1008-4f61-b862-a166ebc14d6a-hubble-tls\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.755681 kubelet[3162]: I0710 00:27:17.755549 3162 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de61c3b0-1008-4f61-b862-a166ebc14d6a-lib-modules\") on node \"ci-4344.1.1-n-69725f0cc9\" DevicePath \"\"" Jul 10 00:27:17.830762 systemd[1]: Removed slice kubepods-besteffort-pod9d5b6c75_cd14_4efd_9590_8c60684cd143.slice - libcontainer container kubepods-besteffort-pod9d5b6c75_cd14_4efd_9590_8c60684cd143.slice. Jul 10 00:27:17.831899 systemd[1]: Removed slice kubepods-burstable-podde61c3b0_1008_4f61_b862_a166ebc14d6a.slice - libcontainer container kubepods-burstable-podde61c3b0_1008_4f61_b862_a166ebc14d6a.slice. Jul 10 00:27:17.832090 systemd[1]: kubepods-burstable-podde61c3b0_1008_4f61_b862_a166ebc14d6a.slice: Consumed 5.237s CPU time, 125.5M memory peak, 152K read from disk, 13.3M written to disk. Jul 10 00:27:18.135817 kubelet[3162]: I0710 00:27:18.135445 3162 scope.go:117] "RemoveContainer" containerID="609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec" Jul 10 00:27:18.142181 containerd[1742]: time="2025-07-10T00:27:18.142087246Z" level=info msg="RemoveContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\"" Jul 10 00:27:18.149923 containerd[1742]: time="2025-07-10T00:27:18.149893432Z" level=info msg="RemoveContainer for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" returns successfully" Jul 10 00:27:18.150126 kubelet[3162]: I0710 00:27:18.150110 3162 scope.go:117] "RemoveContainer" containerID="e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf" Jul 10 00:27:18.152991 containerd[1742]: time="2025-07-10T00:27:18.152965760Z" level=info msg="RemoveContainer for \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\"" Jul 10 00:27:18.160442 containerd[1742]: time="2025-07-10T00:27:18.160415802Z" level=info msg="RemoveContainer for \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" returns successfully" Jul 10 00:27:18.160616 kubelet[3162]: I0710 00:27:18.160593 3162 scope.go:117] "RemoveContainer" containerID="da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80" Jul 10 00:27:18.162485 containerd[1742]: time="2025-07-10T00:27:18.162462132Z" level=info msg="RemoveContainer for \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\"" Jul 10 00:27:18.171963 containerd[1742]: time="2025-07-10T00:27:18.171405525Z" level=info msg="RemoveContainer for \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" returns successfully" Jul 10 00:27:18.173246 kubelet[3162]: I0710 00:27:18.173163 3162 scope.go:117] "RemoveContainer" 
containerID="5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e" Jul 10 00:27:18.175464 containerd[1742]: time="2025-07-10T00:27:18.175444085Z" level=info msg="RemoveContainer for \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\"" Jul 10 00:27:18.181592 containerd[1742]: time="2025-07-10T00:27:18.181568140Z" level=info msg="RemoveContainer for \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" returns successfully" Jul 10 00:27:18.181768 kubelet[3162]: I0710 00:27:18.181752 3162 scope.go:117] "RemoveContainer" containerID="6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d" Jul 10 00:27:18.182943 containerd[1742]: time="2025-07-10T00:27:18.182918401Z" level=info msg="RemoveContainer for \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\"" Jul 10 00:27:18.188469 containerd[1742]: time="2025-07-10T00:27:18.188444888Z" level=info msg="RemoveContainer for \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" returns successfully" Jul 10 00:27:18.188632 kubelet[3162]: I0710 00:27:18.188596 3162 scope.go:117] "RemoveContainer" containerID="609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec" Jul 10 00:27:18.188915 containerd[1742]: time="2025-07-10T00:27:18.188883908Z" level=error msg="ContainerStatus for \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\": not found" Jul 10 00:27:18.189006 kubelet[3162]: E0710 00:27:18.188988 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\": not found" containerID="609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec" Jul 10 00:27:18.189057 kubelet[3162]: I0710 00:27:18.189014 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec"} err="failed to get container status \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\": rpc error: code = NotFound desc = an error occurred when try to find container \"609922447c71433fe2556dd875e06da5b762cf98824c74e3e17b1a415d1e1bec\": not found" Jul 10 00:27:18.189057 kubelet[3162]: I0710 00:27:18.189049 3162 scope.go:117] "RemoveContainer" containerID="e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf" Jul 10 00:27:18.189260 containerd[1742]: time="2025-07-10T00:27:18.189221528Z" level=error msg="ContainerStatus for \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\": not found" Jul 10 00:27:18.189354 kubelet[3162]: E0710 00:27:18.189334 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\": not found" containerID="e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf" Jul 10 00:27:18.189392 kubelet[3162]: I0710 00:27:18.189355 3162 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf"} err="failed to get container status \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6d6a885f79c82423e630f8559f53f6b7f7da4c7cd0b0ae7dad9768a639eeccf\": not found" Jul 10 00:27:18.189392 kubelet[3162]: I0710 00:27:18.189380 3162 scope.go:117] "RemoveContainer" containerID="da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80" Jul 10 00:27:18.189556 containerd[1742]: time="2025-07-10T00:27:18.189520433Z" level=error msg="ContainerStatus for \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\": not found" Jul 10 00:27:18.189629 kubelet[3162]: E0710 00:27:18.189607 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\": not found" containerID="da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80" Jul 10 00:27:18.189660 kubelet[3162]: I0710 00:27:18.189627 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80"} err="failed to get container status \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\": rpc error: code = NotFound desc = an error occurred when try to find container \"da11627b0bee780285a597da4c73db5c5328953cb3cb481fd7ec0ddec0708b80\": not found" Jul 10 00:27:18.189660 kubelet[3162]: I0710 00:27:18.189644 3162 scope.go:117] "RemoveContainer" containerID="5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e" Jul 10 00:27:18.189854 containerd[1742]: time="2025-07-10T00:27:18.189814294Z" level=error msg="ContainerStatus for \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\": not found" Jul 10 00:27:18.189925 kubelet[3162]: E0710 00:27:18.189910 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\": not found" containerID="5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e" Jul 10 00:27:18.189958 kubelet[3162]: I0710 00:27:18.189930 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e"} err="failed to get container status \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b88aad916d205b951536b3a45994d217cb5d0e5f05521b01834bec415b1ea8e\": not found" Jul 10 00:27:18.189958 kubelet[3162]: I0710 00:27:18.189944 3162 scope.go:117] "RemoveContainer" containerID="6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d" Jul 10 00:27:18.190080 containerd[1742]: time="2025-07-10T00:27:18.190057680Z" level=error msg="ContainerStatus for \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\": not found" Jul 10 00:27:18.190164 kubelet[3162]: E0710 00:27:18.190143 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\": not found" containerID="6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d" Jul 10 00:27:18.190200 kubelet[3162]: I0710 00:27:18.190162 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d"} err="failed to get container status \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c69f1bac4b729413a5cb4b7b9189dc3de405a2e37e3a54e506b03cb1cc1587d\": not found" Jul 10 00:27:18.190200 kubelet[3162]: I0710 00:27:18.190183 3162 scope.go:117] "RemoveContainer" containerID="d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200" Jul 10 00:27:18.191384 containerd[1742]: time="2025-07-10T00:27:18.191349870Z" level=info msg="RemoveContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\"" Jul 10 00:27:18.198858 containerd[1742]: time="2025-07-10T00:27:18.198834892Z" level=info msg="RemoveContainer for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" returns successfully" Jul 10 00:27:18.199013 kubelet[3162]: I0710 00:27:18.198992 3162 scope.go:117] "RemoveContainer" containerID="d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200" Jul 10 00:27:18.199195 containerd[1742]: time="2025-07-10T00:27:18.199159177Z" level=error msg="ContainerStatus for \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\": not found" Jul 10 00:27:18.199318 kubelet[3162]: E0710 00:27:18.199300 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\": not found" containerID="d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200" Jul 10 00:27:18.199381 kubelet[3162]: I0710 00:27:18.199321 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200"} err="failed to get container status \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6fdab593800a3acd8cbb8abb48c4f221b4f7a3ed9cba0c845b41006fc90b200\": not found" Jul 10 00:27:18.378894 systemd[1]: var-lib-kubelet-pods-9d5b6c75\x2dcd14\x2d4efd\x2d9590\x2d8c60684cd143-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d552p8.mount: Deactivated successfully. Jul 10 00:27:18.378992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c7b0bf4f9a73ed58d868dc4bfd5897306a3ce3e0fdeb2e03d7dba25817f14f7-shm.mount: Deactivated successfully. 
Jul 10 00:27:18.379055 systemd[1]: var-lib-kubelet-pods-de61c3b0\x2d1008\x2d4f61\x2db862\x2da166ebc14d6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7mfz.mount: Deactivated successfully. Jul 10 00:27:18.379118 systemd[1]: var-lib-kubelet-pods-de61c3b0\x2d1008\x2d4f61\x2db862\x2da166ebc14d6a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:27:18.379176 systemd[1]: var-lib-kubelet-pods-de61c3b0\x2d1008\x2d4f61\x2db862\x2da166ebc14d6a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:27:19.391687 sshd[4695]: Connection closed by 10.200.16.10 port 43208 Jul 10 00:27:19.392322 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:19.395143 systemd[1]: sshd@20-10.200.8.5:22-10.200.16.10:43208.service: Deactivated successfully. Jul 10 00:27:19.396784 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:27:19.398563 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:27:19.399388 systemd-logind[1709]: Removed session 23. Jul 10 00:27:19.504005 systemd[1]: Started sshd@21-10.200.8.5:22-10.200.16.10:43210.service - OpenSSH per-connection server daemon (10.200.16.10:43210). Jul 10 00:27:19.827563 kubelet[3162]: I0710 00:27:19.827457 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5b6c75-cd14-4efd-9590-8c60684cd143" path="/var/lib/kubelet/pods/9d5b6c75-cd14-4efd-9590-8c60684cd143/volumes" Jul 10 00:27:19.828120 kubelet[3162]: I0710 00:27:19.828083 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de61c3b0-1008-4f61-b862-a166ebc14d6a" path="/var/lib/kubelet/pods/de61c3b0-1008-4f61-b862-a166ebc14d6a/volumes" Jul 10 00:27:19.904374 kubelet[3162]: E0710 00:27:19.904336 3162 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:27:20.129387 sshd[4852]: Accepted publickey for core from 10.200.16.10 port 43210 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:20.130399 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:20.134845 systemd-logind[1709]: New session 24 of user core. Jul 10 00:27:20.138871 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:27:20.977033 systemd[1]: Created slice kubepods-burstable-poda7ec6699_561f_456f_953f_da95c4cc496f.slice - libcontainer container kubepods-burstable-poda7ec6699_561f_456f_953f_da95c4cc496f.slice. Jul 10 00:27:21.036594 sshd[4854]: Connection closed by 10.200.16.10 port 43210 Jul 10 00:27:21.035861 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:21.039618 systemd[1]: sshd@21-10.200.8.5:22-10.200.16.10:43210.service: Deactivated successfully. Jul 10 00:27:21.042149 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:27:21.045352 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:27:21.046540 systemd-logind[1709]: Removed session 24. 
Jul 10 00:27:21.070840 kubelet[3162]: I0710 00:27:21.070817 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-bpf-maps\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071113 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7ec6699-561f-456f-953f-da95c4cc496f-cilium-config-path\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071142 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-hostproc\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071159 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-cni-path\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071177 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-cilium-run\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071194 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-host-proc-sys-kernel\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071427 kubelet[3162]: I0710 00:27:21.071210 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7ec6699-561f-456f-953f-da95c4cc496f-hubble-tls\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071585 kubelet[3162]: I0710 00:27:21.071227 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27vm\" (UniqueName: \"kubernetes.io/projected/a7ec6699-561f-456f-953f-da95c4cc496f-kube-api-access-b27vm\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071585 kubelet[3162]: I0710 00:27:21.071244 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-etc-cni-netd\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071585 kubelet[3162]: I0710 00:27:21.071261 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/a7ec6699-561f-456f-953f-da95c4cc496f-cilium-ipsec-secrets\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071585 kubelet[3162]: I0710 00:27:21.071282 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-lib-modules\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071585 kubelet[3162]: I0710 00:27:21.071299 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-xtables-lock\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071712 kubelet[3162]: I0710 00:27:21.071320 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7ec6699-561f-456f-953f-da95c4cc496f-clustermesh-secrets\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071712 kubelet[3162]: I0710 00:27:21.071336 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-cilium-cgroup\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.071712 kubelet[3162]: I0710 00:27:21.071352 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7ec6699-561f-456f-953f-da95c4cc496f-host-proc-sys-net\") pod \"cilium-hn5wd\" (UID: \"a7ec6699-561f-456f-953f-da95c4cc496f\") " pod="kube-system/cilium-hn5wd" Jul 10 00:27:21.151361 systemd[1]: Started sshd@22-10.200.8.5:22-10.200.16.10:39700.service - OpenSSH per-connection server daemon (10.200.16.10:39700). Jul 10 00:27:21.281741 containerd[1742]: time="2025-07-10T00:27:21.281653244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hn5wd,Uid:a7ec6699-561f-456f-953f-da95c4cc496f,Namespace:kube-system,Attempt:0,}" Jul 10 00:27:21.312721 containerd[1742]: time="2025-07-10T00:27:21.312436109Z" level=info msg="connecting to shim dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:27:21.332838 systemd[1]: Started cri-containerd-dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79.scope - libcontainer container dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79. 
Jul 10 00:27:21.353503 containerd[1742]: time="2025-07-10T00:27:21.353481136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hn5wd,Uid:a7ec6699-561f-456f-953f-da95c4cc496f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\"" Jul 10 00:27:21.362695 containerd[1742]: time="2025-07-10T00:27:21.362664678Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:27:21.375264 containerd[1742]: time="2025-07-10T00:27:21.375237456Z" level=info msg="Container 3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:21.385945 containerd[1742]: time="2025-07-10T00:27:21.385917493Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\"" Jul 10 00:27:21.387052 containerd[1742]: time="2025-07-10T00:27:21.386250505Z" level=info msg="StartContainer for \"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\"" Jul 10 00:27:21.387279 containerd[1742]: time="2025-07-10T00:27:21.387247085Z" level=info msg="connecting to shim 3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" protocol=ttrpc version=3 Jul 10 00:27:21.405820 systemd[1]: Started cri-containerd-3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894.scope - libcontainer container 3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894. Jul 10 00:27:21.429518 containerd[1742]: time="2025-07-10T00:27:21.429492694Z" level=info msg="StartContainer for \"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\" returns successfully" Jul 10 00:27:21.433896 systemd[1]: cri-containerd-3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894.scope: Deactivated successfully. Jul 10 00:27:21.436000 containerd[1742]: time="2025-07-10T00:27:21.435972966Z" level=info msg="received exit event container_id:\"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\" id:\"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\" pid:4930 exited_at:{seconds:1752107241 nanos:435791323}" Jul 10 00:27:21.436109 containerd[1742]: time="2025-07-10T00:27:21.435990924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\" id:\"3373c4ca6b6dff9f30fbb496e02e3c8eea225f70c341f276fcabca662ef07894\" pid:4930 exited_at:{seconds:1752107241 nanos:435791323}" Jul 10 00:27:21.802725 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 39700 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:21.803855 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:21.807782 systemd-logind[1709]: New session 25 of user core. Jul 10 00:27:21.812843 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 10 00:27:22.158198 containerd[1742]: time="2025-07-10T00:27:22.158120649Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:27:22.171049 containerd[1742]: time="2025-07-10T00:27:22.170994716Z" level=info msg="Container 5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:22.194962 containerd[1742]: time="2025-07-10T00:27:22.194933258Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\"" Jul 10 00:27:22.195391 containerd[1742]: time="2025-07-10T00:27:22.195292625Z" level=info msg="StartContainer for \"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\"" Jul 10 00:27:22.196179 containerd[1742]: time="2025-07-10T00:27:22.196145104Z" level=info msg="connecting to shim 5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" protocol=ttrpc version=3 Jul 10 00:27:22.217842 systemd[1]: Started cri-containerd-5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da.scope - libcontainer container 5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da. Jul 10 00:27:22.242505 sshd[4964]: Connection closed by 10.200.16.10 port 39700 Jul 10 00:27:22.243310 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:22.246865 systemd[1]: sshd@22-10.200.8.5:22-10.200.16.10:39700.service: Deactivated successfully. Jul 10 00:27:22.249021 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:27:22.252003 systemd[1]: cri-containerd-5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da.scope: Deactivated successfully. Jul 10 00:27:22.252082 containerd[1742]: time="2025-07-10T00:27:22.251690793Z" level=info msg="StartContainer for \"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\" returns successfully" Jul 10 00:27:22.252344 containerd[1742]: time="2025-07-10T00:27:22.252006156Z" level=info msg="received exit event container_id:\"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\" id:\"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\" pid:4980 exited_at:{seconds:1752107242 nanos:251374006}" Jul 10 00:27:22.254084 containerd[1742]: time="2025-07-10T00:27:22.253023506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\" id:\"5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da\" pid:4980 exited_at:{seconds:1752107242 nanos:251374006}" Jul 10 00:27:22.255005 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:27:22.257414 systemd-logind[1709]: Removed session 25. Jul 10 00:27:22.269110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5af1888b617ae27df0f8ab6ee088ce97de70c44398bceadc056ddf1cc11d72da-rootfs.mount: Deactivated successfully. Jul 10 00:27:22.353942 systemd[1]: Started sshd@23-10.200.8.5:22-10.200.16.10:39714.service - OpenSSH per-connection server daemon (10.200.16.10:39714). 
Jul 10 00:27:22.980807 sshd[5015]: Accepted publickey for core from 10.200.16.10 port 39714 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:22.982150 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:22.987177 systemd-logind[1709]: New session 26 of user core. Jul 10 00:27:22.992855 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:27:23.163651 containerd[1742]: time="2025-07-10T00:27:23.163610447Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:27:23.180722 containerd[1742]: time="2025-07-10T00:27:23.179097320Z" level=info msg="Container f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:23.194826 containerd[1742]: time="2025-07-10T00:27:23.194795466Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\"" Jul 10 00:27:23.195640 containerd[1742]: time="2025-07-10T00:27:23.195454601Z" level=info msg="StartContainer for \"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\"" Jul 10 00:27:23.196987 containerd[1742]: time="2025-07-10T00:27:23.196942174Z" level=info msg="connecting to shim f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" protocol=ttrpc version=3 Jul 10 00:27:23.214885 systemd[1]: Started cri-containerd-f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660.scope - libcontainer container f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660. Jul 10 00:27:23.241275 systemd[1]: cri-containerd-f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660.scope: Deactivated successfully. Jul 10 00:27:23.244298 containerd[1742]: time="2025-07-10T00:27:23.244227185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\" id:\"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\" pid:5031 exited_at:{seconds:1752107243 nanos:243638702}" Jul 10 00:27:23.244298 containerd[1742]: time="2025-07-10T00:27:23.244294600Z" level=info msg="received exit event container_id:\"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\" id:\"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\" pid:5031 exited_at:{seconds:1752107243 nanos:243638702}" Jul 10 00:27:23.251157 containerd[1742]: time="2025-07-10T00:27:23.251131016Z" level=info msg="StartContainer for \"f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660\" returns successfully" Jul 10 00:27:23.261551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4318af11f35369bdcffd251cf589c59e1cf7a713b18c9e0c868f8216cbc7660-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:23.813439 kubelet[3162]: I0710 00:27:23.813377 3162 setters.go:618] "Node became not ready" node="ci-4344.1.1-n-69725f0cc9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:27:23Z","lastTransitionTime":"2025-07-10T00:27:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:27:24.168599 containerd[1742]: time="2025-07-10T00:27:24.168313070Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:27:24.185657 containerd[1742]: time="2025-07-10T00:27:24.184832733Z" level=info msg="Container c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:24.196383 containerd[1742]: time="2025-07-10T00:27:24.196354394Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\"" Jul 10 00:27:24.197439 containerd[1742]: time="2025-07-10T00:27:24.196728890Z" level=info msg="StartContainer for \"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\"" Jul 10 00:27:24.197640 containerd[1742]: time="2025-07-10T00:27:24.197618330Z" level=info msg="connecting to shim c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" protocol=ttrpc version=3 Jul 10 00:27:24.217893 systemd[1]: Started cri-containerd-c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231.scope - libcontainer container c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231. Jul 10 00:27:24.237169 systemd[1]: cri-containerd-c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231.scope: Deactivated successfully. Jul 10 00:27:24.239081 containerd[1742]: time="2025-07-10T00:27:24.239060233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\" id:\"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\" pid:5076 exited_at:{seconds:1752107244 nanos:238683046}" Jul 10 00:27:24.241176 containerd[1742]: time="2025-07-10T00:27:24.241074725Z" level=info msg="received exit event container_id:\"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\" id:\"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\" pid:5076 exited_at:{seconds:1752107244 nanos:238683046}" Jul 10 00:27:24.246622 containerd[1742]: time="2025-07-10T00:27:24.246602542Z" level=info msg="StartContainer for \"c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231\" returns successfully" Jul 10 00:27:24.255606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5fb95f6ff11020bf9039d7cc1d8f6aa9aada2062ceb24c95ecd98b7af346231-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:24.905381 kubelet[3162]: E0710 00:27:24.905320 3162 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:27:25.179130 containerd[1742]: time="2025-07-10T00:27:25.178612919Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:27:25.195203 containerd[1742]: time="2025-07-10T00:27:25.195176307Z" level=info msg="Container fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:25.208665 containerd[1742]: time="2025-07-10T00:27:25.208639735Z" level=info msg="CreateContainer within sandbox \"dcaf44b72c19b8d7210050877b46eafc88aa860a7ea7e4e9c81741b91083aa79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\"" Jul 10 00:27:25.209084 containerd[1742]: time="2025-07-10T00:27:25.209002012Z" level=info msg="StartContainer for \"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\"" Jul 10 00:27:25.210112 containerd[1742]: time="2025-07-10T00:27:25.209835700Z" level=info msg="connecting to shim fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3" address="unix:///run/containerd/s/611b45ec62fc06652af08c21f1614d742cd0a15faf8d60c5491fe0b37efcedaf" protocol=ttrpc version=3 Jul 10 00:27:25.229857 systemd[1]: Started cri-containerd-fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3.scope - libcontainer container fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3. Jul 10 00:27:25.258420 containerd[1742]: time="2025-07-10T00:27:25.258393473Z" level=info msg="StartContainer for \"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" returns successfully" Jul 10 00:27:25.316068 containerd[1742]: time="2025-07-10T00:27:25.316032667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" id:\"7208710ace373df12cc4aeae11147a382393ff3b9dc76fe4dd93fcad2d344b3a\" pid:5144 exited_at:{seconds:1752107245 nanos:315823620}" Jul 10 00:27:25.535730 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jul 10 00:27:26.188136 kubelet[3162]: I0710 00:27:26.188019 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hn5wd" podStartSLOduration=6.188001907 podStartE2EDuration="6.188001907s" podCreationTimestamp="2025-07-10 00:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:27:26.187616617 +0000 UTC m=+156.447002937" watchObservedRunningTime="2025-07-10 00:27:26.188001907 +0000 UTC m=+156.447388225" Jul 10 00:27:27.503629 containerd[1742]: time="2025-07-10T00:27:27.503549394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" id:\"78af890f1a798bf64d49f46925be113f62381166847b7c9c0505acec01269097\" pid:5463 exit_status:1 exited_at:{seconds:1752107247 nanos:503297998}" Jul 10 00:27:28.003243 systemd-networkd[1362]: lxc_health: Link UP Jul 10 00:27:28.012547 systemd-networkd[1362]: lxc_health: Gained carrier Jul 10 00:27:29.627880 containerd[1742]: time="2025-07-10T00:27:29.627837747Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" id:\"152c513849aa067f692452a311d13e7b40faea9f425e619e42f830688fed775b\" pid:5680 exited_at:{seconds:1752107249 nanos:627547375}" Jul 10 00:27:29.671916 systemd-networkd[1362]: lxc_health: Gained IPv6LL Jul 10 00:27:31.738393 containerd[1742]: time="2025-07-10T00:27:31.738306319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" id:\"7fcf03b4f410ea751af7cf61f93c91ded764606b01545e974bf5ca6297f3388a\" pid:5709 exited_at:{seconds:1752107251 nanos:737978017}" Jul 10 00:27:33.918517 containerd[1742]: time="2025-07-10T00:27:33.918470143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe1fc7234982e34551a0b24c318887b5e74362d14a388404f0a51a06636611b3\" id:\"77ae50b4f257519efab54e4c7e0d3470bf0b6f100de3d82c753dd06f4578a09b\" pid:5741 exited_at:{seconds:1752107253 nanos:918163808}" Jul 10 00:27:34.021803 sshd[5017]: Connection closed by 10.200.16.10 port 39714 Jul 10 00:27:34.022378 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:34.025658 systemd[1]: sshd@23-10.200.8.5:22-10.200.16.10:39714.service: Deactivated successfully. Jul 10 00:27:34.027376 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:27:34.028312 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:27:34.029440 systemd-logind[1709]: Removed session 26.