Jul 10 00:24:59.981623 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:24:59.981655 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:24:59.981666 kernel: BIOS-provided physical RAM map:
Jul 10 00:24:59.981674 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:24:59.981680 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 10 00:24:59.981686 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 10 00:24:59.981693 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 10 00:24:59.981703 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 10 00:24:59.981709 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 10 00:24:59.981716 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 10 00:24:59.981722 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 10 00:24:59.981729 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 10 00:24:59.981735 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 10 00:24:59.981742 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 10 00:24:59.981752 kernel: NX (Execute Disable) protection: active
Jul 10 00:24:59.981758 kernel: APIC: Static calls initialized
Jul 10 00:24:59.981764 kernel: efi: EFI v2.7 by Microsoft
Jul 10 00:24:59.981770 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab5518 RNG=0x3ffd2018
Jul 10 00:24:59.981778 kernel: random: crng init done
Jul 10 00:24:59.981784 kernel: secureboot: Secure boot disabled
Jul 10 00:24:59.981791 kernel: SMBIOS 3.1.0 present.
Jul 10 00:24:59.981798 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 10 00:24:59.981805 kernel: DMI: Memory slots populated: 2/2
Jul 10 00:24:59.981816 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 10 00:24:59.981823 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 10 00:24:59.981831 kernel: Hyper-V: Nested features: 0x3e0101
Jul 10 00:24:59.981838 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 10 00:24:59.981845 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 10 00:24:59.981852 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:24:59.981859 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:24:59.981867 kernel: tsc: Detected 2300.000 MHz processor
Jul 10 00:24:59.981874 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:24:59.981881 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:24:59.981891 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 10 00:24:59.981899 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:24:59.981906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:24:59.981913 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 10 00:24:59.981919 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 10 00:24:59.981927 kernel: Using GB pages for direct mapping
Jul 10 00:24:59.981934 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:24:59.981945 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 10 00:24:59.981956 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.981963 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.981972 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 10 00:24:59.981979 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 10 00:24:59.981986 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.981993 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.982001 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.982008 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:24:59.982016 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:24:59.982024 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:24:59.982031 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 10 00:24:59.982040 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 10 00:24:59.982046 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 10 00:24:59.982054 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 10 00:24:59.982060 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 10 00:24:59.982073 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 10 00:24:59.982080 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 10 00:24:59.982086 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 10 00:24:59.982091 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 10 00:24:59.982098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 10 00:24:59.982105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 10 00:24:59.982111 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 10 00:24:59.982118 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 10 00:24:59.982125 kernel: Zone ranges:
Jul 10 00:24:59.982133 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:24:59.982140 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 00:24:59.982146 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:24:59.982153 kernel: Device empty
Jul 10 00:24:59.982160 kernel: Movable zone start for each node
Jul 10 00:24:59.982167 kernel: Early memory node ranges
Jul 10 00:24:59.982173 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:24:59.982180 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 10 00:24:59.982187 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 10 00:24:59.982195 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 10 00:24:59.982202 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:24:59.982209 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 10 00:24:59.982216 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:24:59.982223 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:24:59.982230 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 10 00:24:59.982236 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 10 00:24:59.982243 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 10 00:24:59.982250 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:24:59.982258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:24:59.982265 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:24:59.982272 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 10 00:24:59.982279 kernel: TSC deadline timer available
Jul 10 00:24:59.982286 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:24:59.982292 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:24:59.982299 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:24:59.982306 kernel: CPU topo: Max. threads per core: 2
Jul 10 00:24:59.982313 kernel: CPU topo: Num. cores per package: 1
Jul 10 00:24:59.982321 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:24:59.982328 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:24:59.982334 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 10 00:24:59.982341 kernel: Booting paravirtualized kernel on Hyper-V
Jul 10 00:24:59.982348 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:24:59.982355 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:24:59.982362 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:24:59.982369 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:24:59.982376 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:24:59.982384 kernel: Hyper-V: PV spinlocks enabled
Jul 10 00:24:59.982391 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:24:59.982398 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:24:59.982406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:24:59.982413 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 10 00:24:59.982419 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:24:59.982426 kernel: Fallback order for Node 0: 0
Jul 10 00:24:59.982433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 10 00:24:59.982441 kernel: Policy zone: Normal
Jul 10 00:24:59.982448 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:24:59.982454 kernel: software IO TLB: area num 2.
Jul 10 00:24:59.982461 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:24:59.982468 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:24:59.982475 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:24:59.982482 kernel: Dynamic Preempt: voluntary
Jul 10 00:24:59.982489 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:24:59.982497 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:24:59.982511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:24:59.982519 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:24:59.982527 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:24:59.982549 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:24:59.982556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:24:59.982564 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:24:59.982572 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:24:59.982580 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:24:59.982587 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:24:59.982596 kernel: Using NULL legacy PIC
Jul 10 00:24:59.982607 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 10 00:24:59.982616 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:24:59.982624 kernel: Console: colour dummy device 80x25
Jul 10 00:24:59.982631 kernel: printk: legacy console [tty1] enabled
Jul 10 00:24:59.982638 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:24:59.982645 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 10 00:24:59.982651 kernel: ACPI: Core revision 20240827
Jul 10 00:24:59.982658 kernel: Failed to register legacy timer interrupt
Jul 10 00:24:59.982664 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:24:59.982671 kernel: x2apic enabled
Jul 10 00:24:59.982677 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:24:59.982684 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 10 00:24:59.982690 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 10 00:24:59.982697 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 10 00:24:59.982703 kernel: Hyper-V: Using IPI hypercalls
Jul 10 00:24:59.982710 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 10 00:24:59.982718 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 10 00:24:59.982725 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 10 00:24:59.982732 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 10 00:24:59.982739 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 10 00:24:59.982746 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 10 00:24:59.982753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 10 00:24:59.982761 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jul 10 00:24:59.982768 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:24:59.982776 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 00:24:59.982783 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 00:24:59.982790 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:24:59.982797 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:24:59.982803 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:24:59.982810 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 10 00:24:59.982818 kernel: RETBleed: Vulnerable
Jul 10 00:24:59.982825 kernel: Speculative Store Bypass: Vulnerable
Jul 10 00:24:59.982832 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 10 00:24:59.982839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:24:59.982846 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:24:59.982855 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:24:59.982862 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 10 00:24:59.982869 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 10 00:24:59.982877 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 10 00:24:59.982884 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 10 00:24:59.982891 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 10 00:24:59.982898 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 10 00:24:59.982906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:24:59.982913 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 10 00:24:59.982920 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 10 00:24:59.982927 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 10 00:24:59.982935 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 10 00:24:59.982942 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 10 00:24:59.982950 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 10 00:24:59.982957 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 10 00:24:59.982965 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:24:59.982972 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:24:59.982979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:24:59.982986 kernel: landlock: Up and running.
Jul 10 00:24:59.982993 kernel: SELinux: Initializing.
Jul 10 00:24:59.983000 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:24:59.983008 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:24:59.983015 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 10 00:24:59.983024 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 10 00:24:59.983031 kernel: signal: max sigframe size: 11952
Jul 10 00:24:59.983039 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:24:59.983047 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:24:59.983056 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:24:59.983064 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 10 00:24:59.983072 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:24:59.983081 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:24:59.983090 kernel: .... node #0, CPUs: #1
Jul 10 00:24:59.983106 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:24:59.983115 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jul 10 00:24:59.983123 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 299988K reserved, 0K cma-reserved)
Jul 10 00:24:59.983131 kernel: devtmpfs: initialized
Jul 10 00:24:59.983139 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:24:59.983147 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 10 00:24:59.983155 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:24:59.983162 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:24:59.983170 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:24:59.983179 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:24:59.983188 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:24:59.983196 kernel: audit: type=2000 audit(1752107096.028:1): state=initialized audit_enabled=0 res=1
Jul 10 00:24:59.983204 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:24:59.983212 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:24:59.983221 kernel: cpuidle: using governor menu
Jul 10 00:24:59.983229 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:24:59.983237 kernel: dca service started, version 1.12.1
Jul 10 00:24:59.983246 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 10 00:24:59.983256 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 10 00:24:59.983264 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:24:59.983272 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:24:59.983281 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:24:59.983290 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:24:59.983299 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:24:59.983308 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:24:59.983316 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:24:59.983327 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:24:59.983336 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:24:59.983345 kernel: ACPI: Interpreter enabled
Jul 10 00:24:59.983353 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:24:59.983362 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:24:59.983371 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:24:59.983380 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 10 00:24:59.983389 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 10 00:24:59.983398 kernel: iommu: Default domain type: Translated
Jul 10 00:24:59.983407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:24:59.983417 kernel: efivars: Registered efivars operations
Jul 10 00:24:59.983426 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:24:59.983434 kernel: PCI: System does not support PCI
Jul 10 00:24:59.983443 kernel: vgaarb: loaded
Jul 10 00:24:59.983452 kernel: clocksource: Switched to clocksource tsc-early
Jul 10 00:24:59.983461 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:24:59.983470 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:24:59.983478 kernel: pnp: PnP ACPI init
Jul 10 00:24:59.983487 kernel: pnp: PnP ACPI: found 3 devices
Jul 10 00:24:59.983498 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:24:59.983507 kernel: NET: Registered PF_INET protocol family
Jul 10 00:24:59.983515 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:24:59.983524 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 10 00:24:59.983548 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:24:59.983558 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:24:59.983567 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 10 00:24:59.983576 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 10 00:24:59.983586 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:24:59.983595 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:24:59.983604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:24:59.983613 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:24:59.983621 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:24:59.983630 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 00:24:59.983639 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB)
Jul 10 00:24:59.983648 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 10 00:24:59.983657 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 10 00:24:59.983668 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 10 00:24:59.983677 kernel: clocksource: Switched to clocksource tsc
Jul 10 00:24:59.983686 kernel: Initialise system trusted keyrings
Jul 10 00:24:59.983694 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 10 00:24:59.983703 kernel: Key type asymmetric registered
Jul 10 00:24:59.983712 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:24:59.983721 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:24:59.983730 kernel: io scheduler mq-deadline registered
Jul 10 00:24:59.983739 kernel: io scheduler kyber registered
Jul 10 00:24:59.983749 kernel: io scheduler bfq registered
Jul 10 00:24:59.983758 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:24:59.983767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:24:59.983776 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:24:59.983785 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 10 00:24:59.983794 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:24:59.983803 kernel: i8042: PNP: No PS/2 controller found.
Jul 10 00:24:59.983935 kernel: rtc_cmos 00:02: registered as rtc0
Jul 10 00:24:59.984013 kernel: rtc_cmos 00:02: setting system clock to 2025-07-10T00:24:59 UTC (1752107099)
Jul 10 00:24:59.984082 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 10 00:24:59.984093 kernel: intel_pstate: Intel P-state driver initializing
Jul 10 00:24:59.984102 kernel: efifb: probing for efifb
Jul 10 00:24:59.984111 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 10 00:24:59.984120 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 10 00:24:59.984129 kernel: efifb: scrolling: redraw
Jul 10 00:24:59.984138 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:24:59.984147 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 00:24:59.984157 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:24:59.984166 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:24:59.984175 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:24:59.984183 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:24:59.984192 kernel: Segment Routing with IPv6
Jul 10 00:24:59.984201 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:24:59.984210 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:24:59.984219 kernel: Key type dns_resolver registered
Jul 10 00:24:59.984227 kernel: IPI shorthand broadcast: enabled
Jul 10 00:24:59.984238 kernel: sched_clock: Marking stable (2963003766, 88792955)->(3371825849, -320029128)
Jul 10 00:24:59.984247 kernel: registered taskstats version 1
Jul 10 00:24:59.984256 kernel: Loading compiled-in X.509 certificates
Jul 10 00:24:59.984265 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:24:59.984273 kernel: Demotion targets for Node 0: null
Jul 10 00:24:59.984282 kernel: Key type .fscrypt registered
Jul 10 00:24:59.984291 kernel: Key type fscrypt-provisioning registered
Jul 10 00:24:59.984300 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:24:59.984309 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:24:59.984319 kernel: ima: No architecture policies found
Jul 10 00:24:59.984328 kernel: clk: Disabling unused clocks
Jul 10 00:24:59.984336 kernel: Warning: unable to open an initial console.
Jul 10 00:24:59.984345 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:24:59.984354 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:24:59.984363 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:24:59.984372 kernel: Run /init as init process
Jul 10 00:24:59.984381 kernel: with arguments:
Jul 10 00:24:59.984389 kernel: /init
Jul 10 00:24:59.984399 kernel: with environment:
Jul 10 00:24:59.984408 kernel: HOME=/
Jul 10 00:24:59.984416 kernel: TERM=linux
Jul 10 00:24:59.984425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:24:59.984435 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:24:59.984448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:24:59.984458 systemd[1]: Detected virtualization microsoft.
Jul 10 00:24:59.984469 systemd[1]: Detected architecture x86-64.
Jul 10 00:24:59.984478 systemd[1]: Running in initrd.
Jul 10 00:24:59.984487 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:24:59.984497 systemd[1]: Hostname set to .
Jul 10 00:24:59.984506 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:24:59.984516 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:24:59.984525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:24:59.984550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:24:59.984563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:24:59.984573 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:24:59.984582 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:24:59.984593 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:24:59.984603 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:24:59.984613 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:24:59.984622 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:24:59.984633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:24:59.984641 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:24:59.984650 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:24:59.984658 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:24:59.984667 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:24:59.984676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:24:59.984686 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:24:59.984695 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:24:59.984705 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:24:59.984717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:24:59.984726 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:24:59.984735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:24:59.984744 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:24:59.984753 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:24:59.984763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:24:59.984773 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:24:59.984783 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:24:59.984794 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:24:59.984804 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:24:59.984814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:24:59.984833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:24:59.984844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:24:59.984875 systemd-journald[205]: Collecting audit messages is disabled.
Jul 10 00:24:59.984901 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:24:59.984912 systemd-journald[205]: Journal started
Jul 10 00:24:59.984936 systemd-journald[205]: Runtime Journal (/run/log/journal/714671074a314d19a5d9f669487379d9) is 8M, max 158.9M, 150.9M free.
Jul 10 00:24:59.988705 systemd-modules-load[206]: Inserted module 'overlay'
Jul 10 00:24:59.993125 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:24:59.996645 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:25:00.003651 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:25:00.012759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:25:00.020911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:25:00.027766 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 00:25:00.036729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:25:00.037651 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:25:00.042848 kernel: Bridge firewalling registered
Jul 10 00:25:00.039873 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 10 00:25:00.044028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:25:00.046149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:25:00.046989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:25:00.050630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:25:00.053666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:25:00.074343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:25:00.079173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:25:00.085495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:25:00.088784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:25:00.100033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:25:00.113907 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:25:00.141800 systemd-resolved[243]: Positive Trust Anchors:
Jul 10 00:25:00.143742 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:25:00.143786 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:25:00.164810 systemd-resolved[243]: Defaulting to hostname 'linux'.
Jul 10 00:25:00.167663 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:25:00.173898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:25:00.187553 kernel: SCSI subsystem initialized
Jul 10 00:25:00.195551 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:25:00.204560 kernel: iscsi: registered transport (tcp)
Jul 10 00:25:00.227670 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:25:00.227711 kernel: QLogic iSCSI HBA Driver
Jul 10 00:25:00.240959 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:25:00.250673 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:25:00.251567 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:25:00.282798 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:25:00.285656 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:25:00.327554 kernel: raid6: avx512x4 gen() 45366 MB/s Jul 10 00:25:00.345548 kernel: raid6: avx512x2 gen() 44085 MB/s Jul 10 00:25:00.362545 kernel: raid6: avx512x1 gen() 25785 MB/s Jul 10 00:25:00.380543 kernel: raid6: avx2x4 gen() 39604 MB/s Jul 10 00:25:00.397546 kernel: raid6: avx2x2 gen() 44408 MB/s Jul 10 00:25:00.415117 kernel: raid6: avx2x1 gen() 30014 MB/s Jul 10 00:25:00.415137 kernel: raid6: using algorithm avx512x4 gen() 45366 MB/s Jul 10 00:25:00.432942 kernel: raid6: .... xor() 7619 MB/s, rmw enabled Jul 10 00:25:00.432964 kernel: raid6: using avx512x2 recovery algorithm Jul 10 00:25:00.452552 kernel: xor: automatically using best checksumming function avx Jul 10 00:25:00.566557 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:25:00.570912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:25:00.576604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:25:00.593014 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 10 00:25:00.596977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:25:00.605027 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:25:00.619590 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jul 10 00:25:00.636404 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:25:00.638666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:25:00.676438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:25:00.685087 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 10 00:25:00.729549 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:25:00.734554 kernel: hv_vmbus: Vmbus version:5.3 Jul 10 00:25:00.744551 kernel: AES CTR mode by8 optimization enabled Jul 10 00:25:00.746034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:25:00.746127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:00.753250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:00.760552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:00.776916 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:25:00.776955 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:25:00.782738 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 10 00:25:00.782954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:25:00.783040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:00.790911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 10 00:25:00.802572 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 10 00:25:00.802606 kernel: PTP clock support registered Jul 10 00:25:00.802616 kernel: hv_vmbus: registering driver hv_pci Jul 10 00:25:00.809557 kernel: hv_vmbus: registering driver hv_netvsc Jul 10 00:25:00.815613 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 10 00:25:00.823195 kernel: hv_utils: Registering HyperV Utility Driver Jul 10 00:25:00.823232 kernel: hv_vmbus: registering driver hv_utils Jul 10 00:25:00.823249 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 10 00:25:00.830241 kernel: hv_utils: Shutdown IC version 3.2 Jul 10 00:25:00.830285 kernel: hv_utils: Heartbeat IC version 3.0 Jul 10 00:25:00.831583 kernel: hv_utils: TimeSync IC version 4.0 Jul 10 00:25:00.671920 systemd-resolved[243]: Clock change detected. Flushing caches. Jul 10 00:25:00.682092 systemd-journald[205]: Time jumped backwards, rotating. Jul 10 00:25:00.682141 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 10 00:25:00.684973 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:25:00.676800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 00:25:00.693257 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 10 00:25:00.698287 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 10 00:25:00.707274 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5220c091 (unnamed net_device) (uninitialized): VF slot 1 added Jul 10 00:25:00.718187 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 10 00:25:00.726199 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:25:00.726369 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:25:00.730173 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 10 00:25:00.737182 kernel: hv_vmbus: registering driver hid_hyperv Jul 10 00:25:00.741266 kernel: hv_vmbus: registering driver hv_storvsc Jul 10 00:25:00.743225 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 10 00:25:00.743366 kernel: scsi host0: storvsc_host_t Jul 10 00:25:00.743519 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 10 00:25:00.746600 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 10 00:25:00.756254 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 10 00:25:00.756425 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 10 00:25:01.016179 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 00:25:01.022183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:25:01.026552 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 10 00:25:01.026804 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:25:01.028263 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 10 00:25:01.041183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:25:01.056183 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:25:01.293182 kernel: nvme nvme0: using unchecked data buffer Jul 10 00:25:01.505859 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 10 00:25:01.529948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 10 00:25:01.550580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:25:01.561482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:25:01.563513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:25:01.563738 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:25:01.565227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:25:01.575469 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:25:01.580434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:25:01.588886 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:25:01.608272 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:25:01.621190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:25:01.635356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 10 00:25:01.735577 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 10 00:25:01.740490 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 10 00:25:01.740629 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 10 00:25:01.743369 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:25:01.761619 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 10 00:25:01.761667 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 10 00:25:01.761678 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 10 00:25:01.761690 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 10 00:25:01.778227 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:25:01.778388 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 10 00:25:01.778542 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 10 00:25:01.781761 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 10 00:25:01.791174 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 10 00:25:01.794488 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5220c091 eth0: VF registering: eth1 Jul 10 00:25:01.794643 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 10 00:25:01.798186 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 10 00:25:02.634727 disk-uuid[675]: The operation has completed successfully. Jul 10 00:25:02.636910 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:25:02.688084 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:25:02.688190 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:25:02.720328 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 10 00:25:02.730183 sh[718]: Success Jul 10 00:25:02.760583 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:25:02.760632 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:25:02.761950 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:25:02.770175 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:25:02.992730 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:25:02.999104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:25:03.015305 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:25:03.028074 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:25:03.028119 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (731) Jul 10 00:25:03.031484 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:25:03.031529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:03.032378 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:25:03.294081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:25:03.298323 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:25:03.302226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:25:03.305179 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:25:03.317910 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 10 00:25:03.339245 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (754) Jul 10 00:25:03.344350 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:03.344393 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:03.345961 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:25:03.372799 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:03.370126 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:25:03.377273 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:25:03.397056 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:25:03.400275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:25:03.423941 systemd-networkd[900]: lo: Link UP Jul 10 00:25:03.423949 systemd-networkd[900]: lo: Gained carrier Jul 10 00:25:03.432563 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:25:03.432792 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:25:03.425608 systemd-networkd[900]: Enumeration completed Jul 10 00:25:03.425678 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:25:03.425888 systemd[1]: Reached target network.target - Network. Jul 10 00:25:03.426525 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:03.445645 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5220c091 eth0: Data path switched to VF: enP30832s1 Jul 10 00:25:03.426529 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 00:25:03.438490 systemd-networkd[900]: enP30832s1: Link UP Jul 10 00:25:03.438554 systemd-networkd[900]: eth0: Link UP Jul 10 00:25:03.438630 systemd-networkd[900]: eth0: Gained carrier Jul 10 00:25:03.438639 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:03.443327 systemd-networkd[900]: enP30832s1: Gained carrier Jul 10 00:25:03.455192 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:25:04.443416 ignition[864]: Ignition 2.21.0 Jul 10 00:25:04.443427 ignition[864]: Stage: fetch-offline Jul 10 00:25:04.445184 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:25:04.443509 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:04.449234 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 10 00:25:04.443516 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:04.443599 ignition[864]: parsed url from cmdline: "" Jul 10 00:25:04.443602 ignition[864]: no config URL provided Jul 10 00:25:04.443606 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:25:04.443612 ignition[864]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:25:04.443617 ignition[864]: failed to fetch config: resource requires networking Jul 10 00:25:04.443763 ignition[864]: Ignition finished successfully Jul 10 00:25:04.481988 ignition[910]: Ignition 2.21.0 Jul 10 00:25:04.481998 ignition[910]: Stage: fetch Jul 10 00:25:04.482219 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:04.482227 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:04.482303 ignition[910]: parsed url from cmdline: "" Jul 10 00:25:04.482306 ignition[910]: no config URL provided Jul 10 00:25:04.482311 ignition[910]: reading system config file 
"/usr/lib/ignition/user.ign" Jul 10 00:25:04.482317 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:25:04.482347 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 10 00:25:04.506259 systemd-networkd[900]: enP30832s1: Gained IPv6LL Jul 10 00:25:04.542612 ignition[910]: GET result: OK Jul 10 00:25:04.542707 ignition[910]: config has been read from IMDS userdata Jul 10 00:25:04.543526 ignition[910]: parsing config with SHA512: 4f255df37765956f7820debe52dfa0a5767e0c428924d558e09036a7f3b3cc53ae7ebcb26fa08a7b417dfea55f8a4c3a98944b4b3f8f3291ee01d0dc97b5754e Jul 10 00:25:04.550244 unknown[910]: fetched base config from "system" Jul 10 00:25:04.550253 unknown[910]: fetched base config from "system" Jul 10 00:25:04.551515 ignition[910]: fetch: fetch complete Jul 10 00:25:04.550259 unknown[910]: fetched user config from "azure" Jul 10 00:25:04.551520 ignition[910]: fetch: fetch passed Jul 10 00:25:04.554726 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 00:25:04.551565 ignition[910]: Ignition finished successfully Jul 10 00:25:04.559295 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:25:04.591595 ignition[916]: Ignition 2.21.0 Jul 10 00:25:04.591605 ignition[916]: Stage: kargs Jul 10 00:25:04.591787 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:04.591795 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:04.596004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:25:04.593760 ignition[916]: kargs: kargs passed Jul 10 00:25:04.593803 ignition[916]: Ignition finished successfully Jul 10 00:25:04.603523 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 10 00:25:04.622505 ignition[923]: Ignition 2.21.0 Jul 10 00:25:04.622526 ignition[923]: Stage: disks Jul 10 00:25:04.623403 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:04.623421 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:04.626934 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:25:04.624684 ignition[923]: disks: disks passed Jul 10 00:25:04.633277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:25:04.624731 ignition[923]: Ignition finished successfully Jul 10 00:25:04.641210 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:25:04.643317 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:25:04.643686 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:25:04.643707 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:25:04.644458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:25:04.776021 systemd-fsck[932]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 10 00:25:04.782662 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:25:04.786682 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:25:05.038174 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:25:05.038338 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:25:05.042588 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:25:05.059044 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:25:05.064662 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:25:05.071285 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Jul 10 00:25:05.076087 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:25:05.076121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:25:05.087652 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (941) Jul 10 00:25:05.078416 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:25:05.080144 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:25:05.094091 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:05.094117 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:05.094129 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:25:05.085696 systemd-networkd[900]: eth0: Gained IPv6LL Jul 10 00:25:05.099507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:25:05.533975 coreos-metadata[943]: Jul 10 00:25:05.533 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:25:05.536252 coreos-metadata[943]: Jul 10 00:25:05.535 INFO Fetch successful Jul 10 00:25:05.536252 coreos-metadata[943]: Jul 10 00:25:05.535 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:25:05.543455 coreos-metadata[943]: Jul 10 00:25:05.543 INFO Fetch successful Jul 10 00:25:05.557271 coreos-metadata[943]: Jul 10 00:25:05.557 INFO wrote hostname ci-4344.1.1-n-e449e01ea1 to /sysroot/etc/hostname Jul 10 00:25:05.559103 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 10 00:25:05.771245 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:25:05.803579 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:25:05.822784 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:25:05.827902 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:25:06.701901 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:25:06.705663 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:25:06.714396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:25:06.722793 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:25:06.726856 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:06.747266 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:25:06.752140 ignition[1059]: INFO : Ignition 2.21.0 Jul 10 00:25:06.752140 ignition[1059]: INFO : Stage: mount Jul 10 00:25:06.767341 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:06.767341 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:06.767341 ignition[1059]: INFO : mount: mount passed Jul 10 00:25:06.767341 ignition[1059]: INFO : Ignition finished successfully Jul 10 00:25:06.754275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:25:06.764831 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:25:06.779268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 10 00:25:06.796822 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1071) Jul 10 00:25:06.796921 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:06.798518 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:06.798532 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:25:06.804066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:25:06.825000 ignition[1087]: INFO : Ignition 2.21.0 Jul 10 00:25:06.825000 ignition[1087]: INFO : Stage: files Jul 10 00:25:06.829315 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:06.829315 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:25:06.829315 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:25:06.841883 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:25:06.841883 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:25:06.876675 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:25:06.878363 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:25:06.878363 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:25:06.876998 unknown[1087]: wrote ssh authorized keys file for user: core Jul 10 00:25:06.933105 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 00:25:06.939243 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 00:25:06.992201 ignition[1087]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:25:07.166312 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 10 00:25:07.171255 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:25:07.171255 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:25:07.678742 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:25:07.866616 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:25:07.869438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:25:07.895189 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 10 00:25:08.539438 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:25:09.205909 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:25:09.205909 ignition[1087]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:25:09.221337 ignition[1087]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:25:09.227735 ignition[1087]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:25:09.227735 ignition[1087]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:25:09.236970 ignition[1087]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:25:09.236970 ignition[1087]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:25:09.236970 ignition[1087]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:25:09.236970 ignition[1087]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:25:09.236970 ignition[1087]: INFO : files: files passed
Jul 10 00:25:09.236970 ignition[1087]: INFO : Ignition finished successfully
Jul 10 00:25:09.229423 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:25:09.234285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:25:09.245039 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:25:09.258698 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:25:09.271276 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:25:09.258790 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:25:09.282380 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:25:09.282380 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:25:09.271347 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:25:09.272809 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:25:09.273395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:25:09.315365 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:25:09.315449 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:25:09.317974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:25:09.326236 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:25:09.327533 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:25:09.329532 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:25:09.350237 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:25:09.353183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:25:09.371369 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:25:09.372765 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:25:09.373238 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:25:09.373841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:25:09.373941 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:25:09.374869 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:25:09.384582 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:25:09.392262 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:25:09.393877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:25:09.397822 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:25:09.399412 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:25:09.402762 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:25:09.407304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:25:09.410407 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:25:09.413598 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:25:09.418281 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:25:09.421282 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:25:09.422600 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:25:09.428374 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:25:09.431296 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:25:09.434726 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:25:09.435568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:25:09.438419 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:25:09.438548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:25:09.441459 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:25:09.441577 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:25:09.446330 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:25:09.446443 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:25:09.451343 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 00:25:09.451460 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 00:25:09.454339 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:25:09.456265 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:25:09.456505 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:25:09.456635 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:25:09.456942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:25:09.459008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:25:09.463983 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:25:09.464066 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:25:09.496945 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:25:09.501675 ignition[1141]: INFO : Ignition 2.21.0
Jul 10 00:25:09.501675 ignition[1141]: INFO : Stage: umount
Jul 10 00:25:09.501675 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:25:09.501675 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 00:25:09.501675 ignition[1141]: INFO : umount: umount passed
Jul 10 00:25:09.501675 ignition[1141]: INFO : Ignition finished successfully
Jul 10 00:25:09.501467 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:25:09.501547 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:25:09.514488 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:25:09.514572 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:25:09.516012 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:25:09.516051 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:25:09.516239 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:25:09.516268 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:25:09.516526 systemd[1]: Stopped target network.target - Network.
Jul 10 00:25:09.516554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:25:09.516584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:25:09.516891 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:25:09.519613 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:25:09.523540 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:25:09.541039 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:25:09.542227 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:25:09.543470 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:25:09.543505 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:25:09.547224 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:25:09.547253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:25:09.551223 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:25:09.551273 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:25:09.552045 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:25:09.552072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:25:09.552505 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:25:09.552688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:25:09.561195 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:25:09.561309 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:25:09.566592 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:25:09.566764 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:25:09.566856 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:25:09.568788 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:25:09.569192 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:25:09.569278 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:25:09.569316 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:25:09.570616 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:25:09.577135 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:25:09.577235 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:25:09.639032 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5220c091 eth0: Data path switched from VF: enP30832s1
Jul 10 00:25:09.584706 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:25:09.641855 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 10 00:25:09.584743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:25:09.590247 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:25:09.590293 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:25:09.593208 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:25:09.593244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:25:09.611544 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:25:09.621228 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:25:09.621287 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:25:09.628677 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:25:09.632785 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:25:09.659460 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:25:09.659549 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:25:09.662843 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:25:09.662905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:25:09.669237 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:25:09.669276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:25:09.670891 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:25:09.670934 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:25:09.671876 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:25:09.671911 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:25:09.672269 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:25:09.672302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:25:09.674262 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:25:09.674377 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:25:09.674431 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:25:09.677848 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:25:09.677890 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:25:09.678800 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 00:25:09.678825 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:25:09.706317 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:25:09.706372 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:25:09.712224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:25:09.712270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:25:09.718002 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:25:09.718051 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 10 00:25:09.719793 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:25:09.719822 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:25:09.720066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:25:09.720124 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:25:09.745395 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:25:09.745482 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:25:09.749729 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:25:09.750993 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:25:09.751050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:25:09.757098 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:25:09.783576 systemd[1]: Switching root.
Jul 10 00:25:09.854653 systemd-journald[205]: Journal stopped
Jul 10 00:25:13.580372 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:25:13.580405 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:25:13.580417 kernel: SELinux: policy capability open_perms=1
Jul 10 00:25:13.580425 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:25:13.580432 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:25:13.580440 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:25:13.580450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:25:13.580458 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:25:13.580466 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:25:13.580475 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:25:13.580483 kernel: audit: type=1403 audit(1752107111.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:25:13.580493 systemd[1]: Successfully loaded SELinux policy in 294.710ms.
Jul 10 00:25:13.580504 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.725ms.
Jul 10 00:25:13.580517 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:25:13.580528 systemd[1]: Detected virtualization microsoft.
Jul 10 00:25:13.580537 systemd[1]: Detected architecture x86-64.
Jul 10 00:25:13.580547 systemd[1]: Detected first boot.
Jul 10 00:25:13.580557 systemd[1]: Hostname set to .
Jul 10 00:25:13.580567 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:25:13.580577 zram_generator::config[1184]: No configuration found.
Jul 10 00:25:13.580588 kernel: Guest personality initialized and is inactive
Jul 10 00:25:13.580597 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 10 00:25:13.580626 kernel: Initialized host personality
Jul 10 00:25:13.581080 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:25:13.581098 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:25:13.581113 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:25:13.581123 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:25:13.581133 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:25:13.581142 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:25:13.581152 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:25:13.581178 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:25:13.581187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:25:13.581198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:25:13.581207 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:25:13.581215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:25:13.581225 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:25:13.581233 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:25:13.581243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:25:13.581252 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:25:13.581262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:25:13.581275 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:25:13.581287 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:25:13.581297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:25:13.581307 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:25:13.581322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:25:13.581334 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:25:13.581344 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:25:13.581353 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:25:13.581365 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:25:13.581380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:25:13.581392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:25:13.581402 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:25:13.581412 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:25:13.581422 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:25:13.581432 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:25:13.581442 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:25:13.581454 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:25:13.581463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:25:13.581473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:25:13.581483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:25:13.581493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:25:13.581505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:25:13.581515 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:25:13.581525 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:25:13.581538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:25:13.581549 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:25:13.581559 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:25:13.581570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:25:13.581581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:25:13.581592 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:25:13.581602 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:25:13.581612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:25:13.581622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:25:13.581632 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:25:13.581642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:25:13.581651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:25:13.581662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:25:13.581671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:25:13.581682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:25:13.581692 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:25:13.581701 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:25:13.581710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:25:13.581719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:25:13.581729 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:25:13.581739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:25:13.581751 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:25:13.581761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:25:13.581771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:25:13.581779 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:25:13.581788 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:25:13.581798 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:25:13.581806 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:25:13.581815 systemd[1]: Stopped verity-setup.service.
Jul 10 00:25:13.581824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:25:13.581835 kernel: loop: module loaded
Jul 10 00:25:13.581845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:25:13.581854 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:25:13.581863 kernel: fuse: init (API version 7.41)
Jul 10 00:25:13.581872 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:25:13.581904 systemd-journald[1267]: Collecting audit messages is disabled.
Jul 10 00:25:13.581928 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:25:13.581938 systemd-journald[1267]: Journal started
Jul 10 00:25:13.581961 systemd-journald[1267]: Runtime Journal (/run/log/journal/a1fcf5085da84489a599f670a1132fa9) is 8M, max 158.9M, 150.9M free.
Jul 10 00:25:13.190339 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:25:13.198785 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 10 00:25:13.199109 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:25:13.592189 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:25:13.597022 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:25:13.600130 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:25:13.601542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:25:13.604378 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:25:13.604516 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:25:13.606430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:25:13.606584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:25:13.608682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:25:13.608835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:25:13.610927 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:25:13.611075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:25:13.616490 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:25:13.616650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:25:13.619382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:25:13.620913 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:25:13.623434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:25:13.626393 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:25:13.634180 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:25:13.642019 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:25:13.648232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:25:13.654317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:25:13.658241 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:25:13.658268 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:25:13.662640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:25:13.672271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:25:13.687400 kernel: ACPI: bus type drm_connector registered
Jul 10 00:25:13.689550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:25:13.691027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:25:13.695367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:25:13.697547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:25:13.699978 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:25:13.703347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:25:13.705217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:25:13.708261 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:25:13.712592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:25:13.715898 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:25:13.717233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:25:13.719515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:25:13.721797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:25:13.727336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:25:13.738794 systemd-journald[1267]: Time spent on flushing to /var/log/journal/a1fcf5085da84489a599f670a1132fa9 is 39.727ms for 993 entries.
Jul 10 00:25:13.738794 systemd-journald[1267]: System Journal (/var/log/journal/a1fcf5085da84489a599f670a1132fa9) is 11.8M, max 2.6G, 2.6G free.
Jul 10 00:25:13.819241 systemd-journald[1267]: Received client request to flush runtime journal.
Jul 10 00:25:13.819278 systemd-journald[1267]: /var/log/journal/a1fcf5085da84489a599f670a1132fa9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 10 00:25:13.819298 systemd-journald[1267]: Rotating system journal.
Jul 10 00:25:13.819319 kernel: loop0: detected capacity change from 0 to 146240
Jul 10 00:25:13.736452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:25:13.741620 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:25:13.747146 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:25:13.786632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:25:13.820041 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:25:13.824553 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jul 10 00:25:13.824568 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Jul 10 00:25:13.853556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:25:13.857126 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:25:13.867276 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:25:13.937321 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:25:13.939430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:25:13.960255 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jul 10 00:25:13.960271 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jul 10 00:25:13.962623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:25:14.129177 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:25:14.156176 kernel: loop1: detected capacity change from 0 to 28496
Jul 10 00:25:14.200283 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:25:14.461193 kernel: loop2: detected capacity change from 0 to 224512 Jul 10 00:25:14.545180 kernel: loop3: detected capacity change from 0 to 113872 Jul 10 00:25:14.571113 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:25:14.575490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:25:14.604201 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jul 10 00:25:14.751038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:25:14.756198 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:25:14.862960 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:25:14.877178 kernel: loop4: detected capacity change from 0 to 146240 Jul 10 00:25:14.898223 kernel: loop5: detected capacity change from 0 to 28496 Jul 10 00:25:14.904215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:25:14.909955 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:25:14.919181 kernel: loop6: detected capacity change from 0 to 224512 Jul 10 00:25:14.939959 kernel: loop7: detected capacity change from 0 to 113872 Jul 10 00:25:14.946189 kernel: hv_vmbus: registering driver hyperv_fb Jul 10 00:25:14.950216 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:25:14.955864 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 10 00:25:14.956187 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 10 00:25:14.958497 kernel: Console: switching to colour dummy device 80x25 Jul 10 00:25:14.960595 (sd-merge)[1392]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 10 00:25:14.961000 (sd-merge)[1392]: Merged extensions into '/usr'. 
Jul 10 00:25:14.964972 kernel: Console: switching to colour frame buffer device 128x48 Jul 10 00:25:14.970909 kernel: hv_vmbus: registering driver hv_balloon Jul 10 00:25:14.970955 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 10 00:25:14.975489 systemd[1]: Reload requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:25:14.975501 systemd[1]: Reloading... Jul 10 00:25:15.068181 zram_generator::config[1448]: No configuration found. Jul 10 00:25:15.258269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:15.316235 systemd-networkd[1359]: lo: Link UP Jul 10 00:25:15.316243 systemd-networkd[1359]: lo: Gained carrier Jul 10 00:25:15.318002 systemd-networkd[1359]: Enumeration completed Jul 10 00:25:15.323543 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:15.323547 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:25:15.329284 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:25:15.337175 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:25:15.342174 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5220c091 eth0: Data path switched to VF: enP30832s1 Jul 10 00:25:15.342811 systemd-networkd[1359]: enP30832s1: Link UP Jul 10 00:25:15.343274 systemd-networkd[1359]: eth0: Link UP Jul 10 00:25:15.343332 systemd-networkd[1359]: eth0: Gained carrier Jul 10 00:25:15.343379 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 00:25:15.350362 systemd-networkd[1359]: enP30832s1: Gained carrier Jul 10 00:25:15.357204 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:25:15.413714 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 10 00:25:15.435334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:25:15.438798 systemd[1]: Reloading finished in 463 ms. Jul 10 00:25:15.456986 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:25:15.458598 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:25:15.460136 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:25:15.495891 systemd[1]: Starting ensure-sysext.service... Jul 10 00:25:15.499431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:25:15.503248 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:25:15.507252 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:25:15.511264 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:25:15.520962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:15.531720 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:25:15.531749 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:25:15.531960 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 10 00:25:15.535592 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:25:15.538336 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:25:15.539991 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. Jul 10 00:25:15.540101 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. Jul 10 00:25:15.540976 systemd[1]: Reload requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:25:15.541055 systemd[1]: Reloading... Jul 10 00:25:15.549324 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:25:15.549332 systemd-tmpfiles[1526]: Skipping /boot Jul 10 00:25:15.556542 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:25:15.556622 systemd-tmpfiles[1526]: Skipping /boot Jul 10 00:25:15.616206 zram_generator::config[1565]: No configuration found. Jul 10 00:25:15.690288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:15.784641 systemd[1]: Reloading finished in 243 ms. Jul 10 00:25:15.801324 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:25:15.803276 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:25:15.804928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:25:15.812013 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:25:15.820022 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jul 10 00:25:15.821997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:25:15.824122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:25:15.827064 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:25:15.833097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.834399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:15.840052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:25:15.843323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:25:15.847601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:25:15.848494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:25:15.849304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:15.849392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.856549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.856752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:15.856934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 00:25:15.857047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:15.857184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.859992 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:25:15.866422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.867064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:15.872003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:25:15.874591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:25:15.874703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:15.874852 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:25:15.878370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:15.888881 systemd[1]: Finished ensure-sysext.service. Jul 10 00:25:15.889857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:25:15.889989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:25:15.890299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 00:25:15.890415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:25:15.890635 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:25:15.890750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:25:15.893677 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:25:15.893904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:25:15.898861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:25:15.899844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:25:15.914639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:25:15.958289 systemd-resolved[1628]: Positive Trust Anchors: Jul 10 00:25:15.958300 systemd-resolved[1628]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:25:15.958332 systemd-resolved[1628]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:25:15.961934 systemd-resolved[1628]: Using system hostname 'ci-4344.1.1-n-e449e01ea1'. Jul 10 00:25:15.964140 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:25:15.965583 systemd[1]: Reached target network.target - Network. 
Jul 10 00:25:15.965867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:25:15.999969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:16.090937 augenrules[1664]: No rules Jul 10 00:25:16.091751 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:25:16.091963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:25:16.410386 systemd-networkd[1359]: enP30832s1: Gained IPv6LL Jul 10 00:25:16.450305 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:25:16.453408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:25:16.666307 systemd-networkd[1359]: eth0: Gained IPv6LL Jul 10 00:25:16.668387 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:25:16.670470 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:25:18.440563 ldconfig[1319]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:25:18.450250 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:25:18.452888 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:25:18.473105 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:25:18.476387 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:25:18.479333 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:25:18.482225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 10 00:25:18.483812 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:25:18.485636 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:25:18.488263 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:25:18.490042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:25:18.491918 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:25:18.491948 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:25:18.493221 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:25:18.509832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:25:18.513101 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:25:18.517905 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:25:18.521324 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:25:18.524205 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:25:18.527061 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:25:18.530490 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:25:18.533676 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:25:18.536811 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:25:18.538021 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:25:18.539253 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:25:18.539278 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jul 10 00:25:18.541083 systemd[1]: Starting chronyd.service - NTP client/server... Jul 10 00:25:18.544779 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:25:18.550276 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:25:18.553560 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:25:18.558462 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:25:18.563249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:25:18.567340 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:25:18.569799 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:25:18.572146 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:25:18.575269 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jul 10 00:25:18.581872 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 10 00:25:18.585292 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 10 00:25:18.587773 jq[1682]: false Jul 10 00:25:18.588351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:18.594260 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:25:18.598474 KVP[1688]: KVP starting; pid is:1688 Jul 10 00:25:18.600255 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:25:18.603294 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 10 00:25:18.608646 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:25:18.611876 kernel: hv_utils: KVP IC version 4.0 Jul 10 00:25:18.611780 KVP[1688]: KVP LIC Version: 3.1 Jul 10 00:25:18.613248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:25:18.624348 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:25:18.625644 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:25:18.626028 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:25:18.628586 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:25:18.633304 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Refreshing passwd entry cache Jul 10 00:25:18.632813 oslogin_cache_refresh[1687]: Refreshing passwd entry cache Jul 10 00:25:18.635718 extend-filesystems[1686]: Found /dev/nvme0n1p6 Jul 10 00:25:18.635794 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:25:18.648358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:25:18.651676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:25:18.651850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:25:18.655420 extend-filesystems[1686]: Found /dev/nvme0n1p9 Jul 10 00:25:18.661514 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:25:18.664552 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:25:18.664838 extend-filesystems[1686]: Checking size of /dev/nvme0n1p9 Jul 10 00:25:18.670345 oslogin_cache_refresh[1687]: Failure getting users, quitting Jul 10 00:25:18.673835 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Failure getting users, quitting Jul 10 00:25:18.673835 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:25:18.673835 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Refreshing group entry cache Jul 10 00:25:18.670360 oslogin_cache_refresh[1687]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:25:18.671639 oslogin_cache_refresh[1687]: Refreshing group entry cache Jul 10 00:25:18.679938 jq[1700]: true Jul 10 00:25:18.685983 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Failure getting groups, quitting Jul 10 00:25:18.685983 google_oslogin_nss_cache[1687]: oslogin_cache_refresh[1687]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:25:18.685675 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:25:18.684361 oslogin_cache_refresh[1687]: Failure getting groups, quitting Jul 10 00:25:18.684370 oslogin_cache_refresh[1687]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:25:18.686544 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jul 10 00:25:18.690028 (chronyd)[1677]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 10 00:25:18.698197 (ntainerd)[1717]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:25:18.700443 chronyd[1731]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 10 00:25:18.702354 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:25:18.702551 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:25:18.705880 chronyd[1731]: Timezone right/UTC failed leap second check, ignoring Jul 10 00:25:18.707121 systemd[1]: Started chronyd.service - NTP client/server. Jul 10 00:25:18.706022 chronyd[1731]: Loaded seccomp filter (level 2) Jul 10 00:25:18.715005 extend-filesystems[1686]: Old size kept for /dev/nvme0n1p9 Jul 10 00:25:18.711870 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:25:18.713857 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:25:18.726066 jq[1723]: true Jul 10 00:25:18.753329 update_engine[1699]: I20250710 00:25:18.753262 1699 main.cc:92] Flatcar Update Engine starting Jul 10 00:25:18.756579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:25:18.771023 tar[1706]: linux-amd64/LICENSE Jul 10 00:25:18.771344 tar[1706]: linux-amd64/helm Jul 10 00:25:18.823994 dbus-daemon[1680]: [system] SELinux support is enabled Jul 10 00:25:18.824116 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:25:18.828870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 10 00:25:18.828896 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:25:18.830190 systemd-logind[1698]: New seat seat0. Jul 10 00:25:18.832129 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:25:18.832148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:25:18.832496 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:25:18.835230 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:25:18.846078 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:25:18.849034 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:25:18.851888 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:25:18.857046 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:25:18.859744 update_engine[1699]: I20250710 00:25:18.859697 1699 update_check_scheduler.cc:74] Next update check in 2m19s Jul 10 00:25:18.883375 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 10 00:25:18.955128 coreos-metadata[1679]: Jul 10 00:25:18.955 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:25:18.961828 coreos-metadata[1679]: Jul 10 00:25:18.960 INFO Fetch successful Jul 10 00:25:18.961828 coreos-metadata[1679]: Jul 10 00:25:18.960 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 10 00:25:18.964121 coreos-metadata[1679]: Jul 10 00:25:18.964 INFO Fetch successful Jul 10 00:25:18.964524 coreos-metadata[1679]: Jul 10 00:25:18.964 INFO Fetching http://168.63.129.16/machine/1835062b-bffa-4dcc-a4d3-9dab33ee04dd/88627374%2D2be9%2D4048%2D854e%2D311af3966962.%5Fci%2D4344.1.1%2Dn%2De449e01ea1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 10 00:25:18.967222 coreos-metadata[1679]: Jul 10 00:25:18.967 INFO Fetch successful Jul 10 00:25:18.968187 coreos-metadata[1679]: Jul 10 00:25:18.967 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:25:18.978147 coreos-metadata[1679]: Jul 10 00:25:18.978 INFO Fetch successful Jul 10 00:25:19.024771 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:25:19.037603 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:25:19.043470 sshd_keygen[1734]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:25:19.112012 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:25:19.118236 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:25:19.122153 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 10 00:25:19.145297 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:25:19.146217 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:25:19.151480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 10 00:25:19.174285 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 00:25:19.179397 locksmithd[1776]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:25:19.179753 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:25:19.183921 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:25:19.190950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:25:19.193656 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:25:19.474473 tar[1706]: linux-amd64/README.md Jul 10 00:25:19.491828 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:25:19.822658 containerd[1717]: time="2025-07-10T00:25:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:25:19.822916 containerd[1717]: time="2025-07-10T00:25:19.822725395Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:25:19.832107 containerd[1717]: time="2025-07-10T00:25:19.832070579Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.607µs" Jul 10 00:25:19.832227 containerd[1717]: time="2025-07-10T00:25:19.832212314Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:25:19.832276 containerd[1717]: time="2025-07-10T00:25:19.832266826Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:25:19.832444 containerd[1717]: time="2025-07-10T00:25:19.832413203Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:25:19.832444 containerd[1717]: time="2025-07-10T00:25:19.832437004Z" level=info 
msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:25:19.832516 containerd[1717]: time="2025-07-10T00:25:19.832459133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:25:19.832538 containerd[1717]: time="2025-07-10T00:25:19.832511364Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:25:19.832538 containerd[1717]: time="2025-07-10T00:25:19.832521785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.832748127Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.832763052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.832772929Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.832780921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.832847713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833023348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833044564Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833054535Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833088524Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833348220Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:25:19.833487 containerd[1717]: time="2025-07-10T00:25:19.833391374Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:25:19.847339 containerd[1717]: time="2025-07-10T00:25:19.847295837Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847351343Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847365114Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847378684Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847390071Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847399602Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:25:19.847424 containerd[1717]: time="2025-07-10T00:25:19.847417960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:25:19.847533 containerd[1717]: time="2025-07-10T00:25:19.847429543Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:25:19.847533 containerd[1717]: time="2025-07-10T00:25:19.847440218Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:25:19.847533 containerd[1717]: time="2025-07-10T00:25:19.847449864Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:25:19.847533 containerd[1717]: time="2025-07-10T00:25:19.847459660Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:25:19.847533 containerd[1717]: time="2025-07-10T00:25:19.847474067Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847569307Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847585744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847599449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847609219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847623623Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847633978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847644184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847652850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847663029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847671750Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847681142Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847745674Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847761718Z" level=info msg="Start snapshots syncer" Jul 10 00:25:19.848953 containerd[1717]: time="2025-07-10T00:25:19.847781101Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:25:19.849218 containerd[1717]: time="2025-07-10T00:25:19.847999940Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:25:19.849218 containerd[1717]: time="2025-07-10T00:25:19.848040839Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848106181Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848192686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848208444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848218315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848229001Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848239734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848248968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848258204Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848281956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848291759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848300025Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848323311Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848335867Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:25:19.849331 containerd[1717]: time="2025-07-10T00:25:19.848343821Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848354241Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848361420Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848369498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848379483Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848392798Z" level=info msg="runtime interface created" Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848397061Z" level=info msg="created NRI interface" Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848404095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848412310Z" level=info msg="Connect containerd service" Jul 10 00:25:19.849528 containerd[1717]: time="2025-07-10T00:25:19.848433053Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:25:19.851851 
containerd[1717]: time="2025-07-10T00:25:19.851277936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:25:19.931828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:19.941267 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:20.476995 waagent[1820]: 2025-07-10T00:25:20.476752Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 10 00:25:20.479064 waagent[1820]: 2025-07-10T00:25:20.478842Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 10 00:25:20.481838 waagent[1820]: 2025-07-10T00:25:20.480653Z INFO Daemon Daemon Python: 3.11.12 Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481526978Z" level=info msg="Start subscribing containerd event" Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481585444Z" level=info msg="Start recovering state" Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481780632Z" level=info msg="Start event monitor" Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481794836Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481803867Z" level=info msg="Start streaming server" Jul 10 00:25:20.481903 containerd[1717]: time="2025-07-10T00:25:20.481816181Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:25:20.482210 containerd[1717]: time="2025-07-10T00:25:20.482072434Z" level=info msg="runtime interface starting up..." Jul 10 00:25:20.482210 containerd[1717]: time="2025-07-10T00:25:20.482086405Z" level=info msg="starting plugins..." 
Jul 10 00:25:20.482210 containerd[1717]: time="2025-07-10T00:25:20.482101198Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:25:20.482374 waagent[1820]: 2025-07-10T00:25:20.482318Z INFO Daemon Daemon Run daemon Jul 10 00:25:20.482612 containerd[1717]: time="2025-07-10T00:25:20.482592064Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:25:20.483075 containerd[1717]: time="2025-07-10T00:25:20.482999440Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:25:20.485094 waagent[1820]: 2025-07-10T00:25:20.484123Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 10 00:25:20.488502 waagent[1820]: 2025-07-10T00:25:20.488449Z INFO Daemon Daemon Using waagent for provisioning Jul 10 00:25:20.491292 waagent[1820]: 2025-07-10T00:25:20.491254Z INFO Daemon Daemon Activate resource disk Jul 10 00:25:20.492299 containerd[1717]: time="2025-07-10T00:25:20.492278956Z" level=info msg="containerd successfully booted in 0.670493s" Jul 10 00:25:20.493073 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:25:20.496249 waagent[1820]: 2025-07-10T00:25:20.496009Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 10 00:25:20.496683 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:25:20.500854 systemd[1]: Startup finished in 3.112s (kernel) + 11.417s (initrd) + 9.519s (userspace) = 24.049s. 
Jul 10 00:25:20.502766 waagent[1820]: 2025-07-10T00:25:20.502488Z INFO Daemon Daemon Found device: None Jul 10 00:25:20.503341 waagent[1820]: 2025-07-10T00:25:20.503304Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 10 00:25:20.503722 waagent[1820]: 2025-07-10T00:25:20.503701Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 10 00:25:20.504410 waagent[1820]: 2025-07-10T00:25:20.504380Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:25:20.508668 waagent[1820]: 2025-07-10T00:25:20.508199Z INFO Daemon Daemon Running default provisioning handler Jul 10 00:25:20.525744 waagent[1820]: 2025-07-10T00:25:20.525701Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 10 00:25:20.531645 waagent[1820]: 2025-07-10T00:25:20.529740Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 10 00:25:20.533081 waagent[1820]: 2025-07-10T00:25:20.532776Z INFO Daemon Daemon cloud-init is enabled: False Jul 10 00:25:20.534479 waagent[1820]: 2025-07-10T00:25:20.534440Z INFO Daemon Daemon Copying ovf-env.xml Jul 10 00:25:20.569605 kubelet[1839]: E0710 00:25:20.569579 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:20.571737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:20.571865 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 10 00:25:20.572467 systemd[1]: kubelet.service: Consumed 931ms CPU time, 264.9M memory peak. Jul 10 00:25:20.598859 waagent[1820]: 2025-07-10T00:25:20.598814Z INFO Daemon Daemon Successfully mounted dvd Jul 10 00:25:20.623766 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 10 00:25:20.624846 waagent[1820]: 2025-07-10T00:25:20.624800Z INFO Daemon Daemon Detect protocol endpoint Jul 10 00:25:20.626305 waagent[1820]: 2025-07-10T00:25:20.626222Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:25:20.627900 waagent[1820]: 2025-07-10T00:25:20.627870Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 10 00:25:20.629590 waagent[1820]: 2025-07-10T00:25:20.629561Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 10 00:25:20.631109 waagent[1820]: 2025-07-10T00:25:20.631081Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 10 00:25:20.632713 waagent[1820]: 2025-07-10T00:25:20.632668Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 10 00:25:20.642945 waagent[1820]: 2025-07-10T00:25:20.642915Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 10 00:25:20.644111 waagent[1820]: 2025-07-10T00:25:20.643613Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 10 00:25:20.644111 waagent[1820]: 2025-07-10T00:25:20.643852Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 10 00:25:20.723096 login[1822]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:25:20.725028 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:25:20.736191 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:25:20.738342 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 10 00:25:20.741646 waagent[1820]: 2025-07-10T00:25:20.741596Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 10 00:25:20.745023 waagent[1820]: 2025-07-10T00:25:20.744520Z INFO Daemon Daemon Forcing an update of the goal state. Jul 10 00:25:20.745360 systemd-logind[1698]: New session 1 of user core. Jul 10 00:25:20.749134 waagent[1820]: 2025-07-10T00:25:20.748843Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:25:20.749967 systemd-logind[1698]: New session 2 of user core. Jul 10 00:25:20.764140 waagent[1820]: 2025-07-10T00:25:20.764100Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.764948Z INFO Daemon Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.765479Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 00fa1275-de73-4236-9889-87ce663f944a eTag: 11312177249518347773 source: Fabric] Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.766039Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.766363Z INFO Daemon Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.766568Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:25:20.765417 waagent[1820]: 2025-07-10T00:25:20.771027Z INFO Daemon Daemon Downloading artifacts profile blob Jul 10 00:25:20.780926 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:25:20.783132 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:25:20.790754 (systemd)[1873]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:25:20.792388 systemd-logind[1698]: New session c1 of user core. 
Jul 10 00:25:20.867185 waagent[1820]: 2025-07-10T00:25:20.866664Z INFO Daemon Downloaded certificate {'thumbprint': '81970EC35E0524EDF6AB079ECC134946CD226589', 'hasPrivateKey': True} Jul 10 00:25:20.871022 waagent[1820]: 2025-07-10T00:25:20.870989Z INFO Daemon Fetch goal state completed Jul 10 00:25:20.878130 waagent[1820]: 2025-07-10T00:25:20.878103Z INFO Daemon Daemon Starting provisioning Jul 10 00:25:20.880025 waagent[1820]: 2025-07-10T00:25:20.879993Z INFO Daemon Daemon Handle ovf-env.xml. Jul 10 00:25:20.881542 waagent[1820]: 2025-07-10T00:25:20.881460Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-e449e01ea1] Jul 10 00:25:20.897192 waagent[1820]: 2025-07-10T00:25:20.897134Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-e449e01ea1] Jul 10 00:25:20.899286 waagent[1820]: 2025-07-10T00:25:20.899251Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 10 00:25:20.902180 waagent[1820]: 2025-07-10T00:25:20.901650Z INFO Daemon Daemon Primary interface is [eth0] Jul 10 00:25:20.909751 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:20.909764 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:25:20.909784 systemd-networkd[1359]: eth0: DHCP lease lost Jul 10 00:25:20.910924 waagent[1820]: 2025-07-10T00:25:20.910877Z INFO Daemon Daemon Create user account if not exists Jul 10 00:25:20.913027 waagent[1820]: 2025-07-10T00:25:20.912990Z INFO Daemon Daemon User core already exists, skip useradd Jul 10 00:25:20.916177 waagent[1820]: 2025-07-10T00:25:20.915123Z INFO Daemon Daemon Configure sudoer Jul 10 00:25:20.920507 waagent[1820]: 2025-07-10T00:25:20.920462Z INFO Daemon Daemon Configure sshd Jul 10 00:25:20.925724 waagent[1820]: 2025-07-10T00:25:20.925679Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. 
It also configures SSH client probing to keep connections alive. Jul 10 00:25:20.930948 waagent[1820]: 2025-07-10T00:25:20.930899Z INFO Daemon Daemon Deploy ssh public key. Jul 10 00:25:20.932146 systemd[1873]: Queued start job for default target default.target. Jul 10 00:25:20.932938 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:25:20.934951 systemd[1873]: Created slice app.slice - User Application Slice. Jul 10 00:25:20.934974 systemd[1873]: Reached target paths.target - Paths. Jul 10 00:25:20.935368 systemd[1873]: Reached target timers.target - Timers. Jul 10 00:25:20.936532 systemd[1873]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:25:20.944551 systemd[1873]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:25:20.944599 systemd[1873]: Reached target sockets.target - Sockets. Jul 10 00:25:20.944639 systemd[1873]: Reached target basic.target - Basic System. Jul 10 00:25:20.944695 systemd[1873]: Reached target default.target - Main User Target. Jul 10 00:25:20.944715 systemd[1873]: Startup finished in 148ms. Jul 10 00:25:20.944735 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:25:20.945924 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:25:20.946699 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:25:22.002741 waagent[1820]: 2025-07-10T00:25:22.002683Z INFO Daemon Daemon Provisioning complete Jul 10 00:25:22.016673 waagent[1820]: 2025-07-10T00:25:22.016640Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 10 00:25:22.017328 waagent[1820]: 2025-07-10T00:25:22.017237Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 10 00:25:22.017391 waagent[1820]: 2025-07-10T00:25:22.017366Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 10 00:25:22.114412 waagent[1914]: 2025-07-10T00:25:22.114345Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 10 00:25:22.114653 waagent[1914]: 2025-07-10T00:25:22.114434Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 10 00:25:22.114653 waagent[1914]: 2025-07-10T00:25:22.114468Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 10 00:25:22.114653 waagent[1914]: 2025-07-10T00:25:22.114502Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 10 00:25:22.154331 waagent[1914]: 2025-07-10T00:25:22.154287Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 10 00:25:22.154445 waagent[1914]: 2025-07-10T00:25:22.154424Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:25:22.154489 waagent[1914]: 2025-07-10T00:25:22.154473Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:25:22.160189 waagent[1914]: 2025-07-10T00:25:22.160131Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:25:22.179115 waagent[1914]: 2025-07-10T00:25:22.179086Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 10 00:25:22.179431 waagent[1914]: 2025-07-10T00:25:22.179407Z INFO ExtHandler Jul 10 00:25:22.179480 waagent[1914]: 2025-07-10T00:25:22.179454Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2df27552-ade6-4e96-a033-1dbbe7d2edef eTag: 11312177249518347773 source: Fabric] Jul 10 00:25:22.179647 waagent[1914]: 2025-07-10T00:25:22.179625Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:25:22.179934 waagent[1914]: 2025-07-10T00:25:22.179912Z INFO ExtHandler Jul 10 00:25:22.179965 waagent[1914]: 2025-07-10T00:25:22.179948Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:25:22.184901 waagent[1914]: 2025-07-10T00:25:22.184877Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 10 00:25:22.271432 waagent[1914]: 2025-07-10T00:25:22.271357Z INFO ExtHandler Downloaded certificate {'thumbprint': '81970EC35E0524EDF6AB079ECC134946CD226589', 'hasPrivateKey': True} Jul 10 00:25:22.271710 waagent[1914]: 2025-07-10T00:25:22.271682Z INFO ExtHandler Fetch goal state completed Jul 10 00:25:22.289006 waagent[1914]: 2025-07-10T00:25:22.288963Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 10 00:25:22.293031 waagent[1914]: 2025-07-10T00:25:22.292982Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1914 Jul 10 00:25:22.293137 waagent[1914]: 2025-07-10T00:25:22.293098Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 10 00:25:22.293391 waagent[1914]: 2025-07-10T00:25:22.293367Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 10 00:25:22.294324 waagent[1914]: 2025-07-10T00:25:22.294298Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 10 00:25:22.294596 waagent[1914]: 2025-07-10T00:25:22.294572Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 10 00:25:22.294697 waagent[1914]: 2025-07-10T00:25:22.294679Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 10 00:25:22.295059 waagent[1914]: 2025-07-10T00:25:22.295038Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jul 10 00:25:22.326322 waagent[1914]: 2025-07-10T00:25:22.326300Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 10 00:25:22.326440 waagent[1914]: 2025-07-10T00:25:22.326421Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 10 00:25:22.331507 waagent[1914]: 2025-07-10T00:25:22.331355Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 10 00:25:22.336203 systemd[1]: Reload requested from client PID 1929 ('systemctl') (unit waagent.service)... Jul 10 00:25:22.336215 systemd[1]: Reloading... Jul 10 00:25:22.420200 zram_generator::config[1970]: No configuration found. Jul 10 00:25:22.493008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:22.583404 systemd[1]: Reloading finished in 246 ms. Jul 10 00:25:22.599177 waagent[1914]: 2025-07-10T00:25:22.598983Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 10 00:25:22.599177 waagent[1914]: 2025-07-10T00:25:22.599094Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 10 00:25:22.691172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#271 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 10 00:25:22.862603 waagent[1914]: 2025-07-10T00:25:22.862505Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 10 00:25:22.862810 waagent[1914]: 2025-07-10T00:25:22.862786Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 10 00:25:22.863406 waagent[1914]: 2025-07-10T00:25:22.863372Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 10 00:25:22.863734 waagent[1914]: 2025-07-10T00:25:22.863711Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:25:22.863788 waagent[1914]: 2025-07-10T00:25:22.863758Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 10 00:25:22.863838 waagent[1914]: 2025-07-10T00:25:22.863821Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:25:22.864067 waagent[1914]: 2025-07-10T00:25:22.864046Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 10 00:25:22.864136 waagent[1914]: 2025-07-10T00:25:22.864113Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:25:22.864521 waagent[1914]: 2025-07-10T00:25:22.864488Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 10 00:25:22.864615 waagent[1914]: 2025-07-10T00:25:22.864583Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:25:22.864730 waagent[1914]: 2025-07-10T00:25:22.864708Z INFO EnvHandler ExtHandler Configure routes Jul 10 00:25:22.864767 waagent[1914]: 2025-07-10T00:25:22.864754Z INFO EnvHandler ExtHandler Gateway:None Jul 10 00:25:22.864810 waagent[1914]: 2025-07-10T00:25:22.864786Z INFO EnvHandler ExtHandler Routes:None Jul 10 00:25:22.864856 waagent[1914]: 2025-07-10T00:25:22.864805Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 10 00:25:22.865438 waagent[1914]: 2025-07-10T00:25:22.865394Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 10 00:25:22.865538 waagent[1914]: 2025-07-10T00:25:22.865512Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 10 00:25:22.865538 waagent[1914]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 10 00:25:22.865538 waagent[1914]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 10 00:25:22.865538 waagent[1914]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 10 00:25:22.865538 waagent[1914]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:25:22.865538 waagent[1914]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:25:22.865538 waagent[1914]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:25:22.865817 waagent[1914]: 2025-07-10T00:25:22.865795Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 10 00:25:22.866604 waagent[1914]: 2025-07-10T00:25:22.866576Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 10 00:25:22.874809 waagent[1914]: 2025-07-10T00:25:22.874781Z INFO ExtHandler ExtHandler Jul 10 00:25:22.874883 waagent[1914]: 2025-07-10T00:25:22.874835Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fca68a3a-269b-4c72-a647-f700d0a63815 correlation 98c895d8-b13e-40cc-aa82-b8f9b1a1d5a3 created: 2025-07-10T00:24:28.467018Z] Jul 10 00:25:22.875110 waagent[1914]: 2025-07-10T00:25:22.875086Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 10 00:25:22.875525 waagent[1914]: 2025-07-10T00:25:22.875503Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 10 00:25:22.901142 waagent[1914]: 2025-07-10T00:25:22.901103Z INFO MonitorHandler ExtHandler Network interfaces: Jul 10 00:25:22.901142 waagent[1914]: Executing ['ip', '-a', '-o', 'link']: Jul 10 00:25:22.901142 waagent[1914]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 10 00:25:22.901142 waagent[1914]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:c0:91 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jul 10 00:25:22.901142 waagent[1914]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:c0:91 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jul 10 00:25:22.901142 waagent[1914]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 10 00:25:22.901142 waagent[1914]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 10 00:25:22.901142 waagent[1914]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 10 00:25:22.901142 waagent[1914]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 10 00:25:22.901142 waagent[1914]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 10 00:25:22.901142 waagent[1914]: 2: eth0 inet6 fe80::7e1e:52ff:fe20:c091/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 00:25:22.901142 waagent[1914]: 3: enP30832s1 inet6 fe80::7e1e:52ff:fe20:c091/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 00:25:22.903188 waagent[1914]: 2025-07-10T00:25:22.902878Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 
(nf_tables): Illegal option `--numeric' with this command Jul 10 00:25:22.903188 waagent[1914]: Try `iptables -h' or 'iptables --help' for more information.) Jul 10 00:25:22.903332 waagent[1914]: 2025-07-10T00:25:22.903311Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 55A87421-ACB9-4C38-81D3-21E18339D063;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 10 00:25:22.945766 waagent[1914]: 2025-07-10T00:25:22.945722Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 10 00:25:22.945766 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:25:22.945766 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.945766 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:25:22.945766 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.945766 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:25:22.945766 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.945766 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 00:25:22.945766 waagent[1914]: 3 535 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 00:25:22.945766 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 10 00:25:22.948503 waagent[1914]: 2025-07-10T00:25:22.948462Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 10 00:25:22.948503 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:25:22.948503 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.948503 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 00:25:22.948503 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.948503 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 
bytes) Jul 10 00:25:22.948503 waagent[1914]: pkts bytes target prot opt in out source destination Jul 10 00:25:22.948503 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 00:25:22.948503 waagent[1914]: 4 587 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 00:25:22.948503 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 10 00:25:30.822822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:25:30.824521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:31.392217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:31.396439 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:31.427972 kubelet[2065]: E0710 00:25:31.427938 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:31.430733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:31.430856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:31.431155 systemd[1]: kubelet.service: Consumed 129ms CPU time, 107.8M memory peak. Jul 10 00:25:41.553018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:25:41.554784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:42.042056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:25:42.046467 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:42.075770 kubelet[2080]: E0710 00:25:42.075740 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:42.077217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:42.077339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:42.077639 systemd[1]: kubelet.service: Consumed 122ms CPU time, 108.5M memory peak. Jul 10 00:25:42.491192 chronyd[1731]: Selected source PHC0 Jul 10 00:25:45.258594 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:25:45.259714 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:51778.service - OpenSSH per-connection server daemon (10.200.16.10:51778). Jul 10 00:25:45.976580 sshd[2089]: Accepted publickey for core from 10.200.16.10 port 51778 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:45.977952 sshd-session[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:45.982772 systemd-logind[1698]: New session 3 of user core. Jul 10 00:25:45.989303 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:25:46.532547 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:51794.service - OpenSSH per-connection server daemon (10.200.16.10:51794). 
Jul 10 00:25:47.166416 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 51794 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:47.167775 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:47.172415 systemd-logind[1698]: New session 4 of user core. Jul 10 00:25:47.181324 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:25:47.609978 sshd[2096]: Connection closed by 10.200.16.10 port 51794 Jul 10 00:25:47.610559 sshd-session[2094]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:47.614016 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:51794.service: Deactivated successfully. Jul 10 00:25:47.615427 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:25:47.616077 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:25:47.617271 systemd-logind[1698]: Removed session 4. Jul 10 00:25:47.725093 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:51798.service - OpenSSH per-connection server daemon (10.200.16.10:51798). Jul 10 00:25:48.356914 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 51798 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:48.358149 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:48.362661 systemd-logind[1698]: New session 5 of user core. Jul 10 00:25:48.371324 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:25:48.799644 sshd[2104]: Connection closed by 10.200.16.10 port 51798 Jul 10 00:25:48.800401 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:48.804017 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:51798.service: Deactivated successfully. Jul 10 00:25:48.805392 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:25:48.805976 systemd-logind[1698]: Session 5 logged out. 
Waiting for processes to exit. Jul 10 00:25:48.807079 systemd-logind[1698]: Removed session 5. Jul 10 00:25:48.916474 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:51802.service - OpenSSH per-connection server daemon (10.200.16.10:51802). Jul 10 00:25:49.552836 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 51802 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:49.554143 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:49.558937 systemd-logind[1698]: New session 6 of user core. Jul 10 00:25:49.569282 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:25:49.996367 sshd[2112]: Connection closed by 10.200.16.10 port 51802 Jul 10 00:25:49.997205 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:49.999997 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:51802.service: Deactivated successfully. Jul 10 00:25:50.001536 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:25:50.002757 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:25:50.003794 systemd-logind[1698]: Removed session 6. Jul 10 00:25:50.119445 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:44012.service - OpenSSH per-connection server daemon (10.200.16.10:44012). Jul 10 00:25:50.749681 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 44012 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:50.750964 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:50.755443 systemd-logind[1698]: New session 7 of user core. Jul 10 00:25:50.764313 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 10 00:25:51.203136 sudo[2121]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:25:51.203360 sudo[2121]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:51.217938 sudo[2121]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:51.319860 sshd[2120]: Connection closed by 10.200.16.10 port 44012 Jul 10 00:25:51.320576 sshd-session[2118]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:51.323794 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:44012.service: Deactivated successfully. Jul 10 00:25:51.325223 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:25:51.326759 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:25:51.327604 systemd-logind[1698]: Removed session 7. Jul 10 00:25:51.443744 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:44016.service - OpenSSH per-connection server daemon (10.200.16.10:44016). Jul 10 00:25:52.082448 sshd[2127]: Accepted publickey for core from 10.200.16.10 port 44016 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:52.083809 sshd-session[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:52.084944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:25:52.087260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:52.090045 systemd-logind[1698]: New session 8 of user core. Jul 10 00:25:52.097433 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:25:52.424799 sudo[2134]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:25:52.425016 sudo[2134]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:52.593222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:25:52.597476 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:52.599802 sudo[2134]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:52.605551 sudo[2133]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:25:52.606024 sudo[2133]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:52.617520 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:25:52.651089 kubelet[2141]: E0710 00:25:52.651038 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:52.652532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:52.652650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:52.653107 systemd[1]: kubelet.service: Consumed 132ms CPU time, 109.9M memory peak. Jul 10 00:25:52.706958 augenrules[2168]: No rules Jul 10 00:25:52.707865 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:25:52.708059 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:25:52.709274 sudo[2133]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:52.808731 sshd[2132]: Connection closed by 10.200.16.10 port 44016 Jul 10 00:25:52.809278 sshd-session[2127]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:52.812245 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:44016.service: Deactivated successfully. Jul 10 00:25:52.813522 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:25:52.815000 systemd-logind[1698]: Session 8 logged out. 
Waiting for processes to exit. Jul 10 00:25:52.815864 systemd-logind[1698]: Removed session 8. Jul 10 00:25:52.923440 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:44030.service - OpenSSH per-connection server daemon (10.200.16.10:44030). Jul 10 00:25:53.555362 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 44030 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:25:53.556651 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:53.561309 systemd-logind[1698]: New session 9 of user core. Jul 10 00:25:53.575308 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:25:53.898791 sudo[2180]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:25:53.899039 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:54.916864 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:25:54.929435 (dockerd)[2199]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:25:55.729864 dockerd[2199]: time="2025-07-10T00:25:55.729806997Z" level=info msg="Starting up" Jul 10 00:25:55.730924 dockerd[2199]: time="2025-07-10T00:25:55.730891374Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:25:55.810199 systemd[1]: var-lib-docker-metacopy\x2dcheck3666023884-merged.mount: Deactivated successfully. Jul 10 00:25:55.832644 dockerd[2199]: time="2025-07-10T00:25:55.832608897Z" level=info msg="Loading containers: start." Jul 10 00:25:55.858209 kernel: Initializing XFRM netlink socket Jul 10 00:25:56.076964 systemd-networkd[1359]: docker0: Link UP Jul 10 00:25:56.088725 dockerd[2199]: time="2025-07-10T00:25:56.088686714Z" level=info msg="Loading containers: done." 
Jul 10 00:25:56.113659 dockerd[2199]: time="2025-07-10T00:25:56.113625498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:25:56.113779 dockerd[2199]: time="2025-07-10T00:25:56.113706091Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:25:56.113807 dockerd[2199]: time="2025-07-10T00:25:56.113791709Z" level=info msg="Initializing buildkit" Jul 10 00:25:56.148846 dockerd[2199]: time="2025-07-10T00:25:56.148817202Z" level=info msg="Completed buildkit initialization" Jul 10 00:25:56.154897 dockerd[2199]: time="2025-07-10T00:25:56.154865013Z" level=info msg="Daemon has completed initialization" Jul 10 00:25:56.155090 dockerd[2199]: time="2025-07-10T00:25:56.154921892Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:25:56.155084 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:25:57.223277 containerd[1717]: time="2025-07-10T00:25:57.223230658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 00:25:57.830618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202698760.mount: Deactivated successfully. 
Jul 10 00:25:58.947176 containerd[1717]: time="2025-07-10T00:25:58.947128806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:58.949082 containerd[1717]: time="2025-07-10T00:25:58.949047981Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 10 00:25:58.951418 containerd[1717]: time="2025-07-10T00:25:58.951375277Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:58.954446 containerd[1717]: time="2025-07-10T00:25:58.954407807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:58.955169 containerd[1717]: time="2025-07-10T00:25:58.954903407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.731627366s" Jul 10 00:25:58.955169 containerd[1717]: time="2025-07-10T00:25:58.954937276Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 10 00:25:58.955514 containerd[1717]: time="2025-07-10T00:25:58.955499682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 00:26:00.117361 containerd[1717]: time="2025-07-10T00:26:00.117315884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.119438 containerd[1717]: time="2025-07-10T00:26:00.119406698Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 10 00:26:00.121857 containerd[1717]: time="2025-07-10T00:26:00.121817037Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.125330 containerd[1717]: time="2025-07-10T00:26:00.125287537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:00.125990 containerd[1717]: time="2025-07-10T00:26:00.125865271Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.170293074s" Jul 10 00:26:00.125990 containerd[1717]: time="2025-07-10T00:26:00.125899399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 10 00:26:00.126487 containerd[1717]: time="2025-07-10T00:26:00.126437463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 00:26:01.211547 containerd[1717]: time="2025-07-10T00:26:01.211499471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:01.213564 containerd[1717]: time="2025-07-10T00:26:01.213534675Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 10 00:26:01.215890 containerd[1717]: time="2025-07-10T00:26:01.215852420Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:01.219369 containerd[1717]: time="2025-07-10T00:26:01.219326223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:01.220176 containerd[1717]: time="2025-07-10T00:26:01.219963441Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.093499635s" Jul 10 00:26:01.220176 containerd[1717]: time="2025-07-10T00:26:01.219995956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 10 00:26:01.220707 containerd[1717]: time="2025-07-10T00:26:01.220684758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 00:26:02.138878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162960719.mount: Deactivated successfully. 
Jul 10 00:26:02.484025 containerd[1717]: time="2025-07-10T00:26:02.483919656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:02.486075 containerd[1717]: time="2025-07-10T00:26:02.486050417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 10 00:26:02.488946 containerd[1717]: time="2025-07-10T00:26:02.488897090Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:02.493045 containerd[1717]: time="2025-07-10T00:26:02.492981712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:02.493580 containerd[1717]: time="2025-07-10T00:26:02.493461972Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.272751208s" Jul 10 00:26:02.493580 containerd[1717]: time="2025-07-10T00:26:02.493491477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 10 00:26:02.493998 containerd[1717]: time="2025-07-10T00:26:02.493976200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:26:02.802706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 10 00:26:02.804210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 00:26:03.117178 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 10 00:26:03.231126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:03.235453 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:26:03.268213 kubelet[2475]: E0710 00:26:03.268150 2475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:26:03.269474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:26:03.269569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:26:03.269880 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.7M memory peak. Jul 10 00:26:03.446219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584706065.mount: Deactivated successfully. 
Jul 10 00:26:04.255028 containerd[1717]: time="2025-07-10T00:26:04.254983836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.257947 containerd[1717]: time="2025-07-10T00:26:04.257911672Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 10 00:26:04.260444 containerd[1717]: time="2025-07-10T00:26:04.260404660Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.263882 containerd[1717]: time="2025-07-10T00:26:04.263840213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.264561 containerd[1717]: time="2025-07-10T00:26:04.264452105Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.770439679s" Jul 10 00:26:04.264561 containerd[1717]: time="2025-07-10T00:26:04.264479688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:26:04.264986 containerd[1717]: time="2025-07-10T00:26:04.264969778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:26:04.459868 update_engine[1699]: I20250710 00:26:04.459807 1699 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:26:04.779345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110828873.mount: Deactivated successfully. Jul 10 00:26:04.795988 containerd[1717]: time="2025-07-10T00:26:04.795949058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:04.798005 containerd[1717]: time="2025-07-10T00:26:04.797966746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 10 00:26:04.800353 containerd[1717]: time="2025-07-10T00:26:04.800312744Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:04.803793 containerd[1717]: time="2025-07-10T00:26:04.803743298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:04.804479 containerd[1717]: time="2025-07-10T00:26:04.804138446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 539.089109ms" Jul 10 00:26:04.804479 containerd[1717]: time="2025-07-10T00:26:04.804178966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:26:04.804668 containerd[1717]: time="2025-07-10T00:26:04.804655240Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 00:26:05.356806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150806913.mount: Deactivated successfully. Jul 10 00:26:06.873253 containerd[1717]: time="2025-07-10T00:26:06.873208920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:06.875274 containerd[1717]: time="2025-07-10T00:26:06.875242134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 10 00:26:06.878170 containerd[1717]: time="2025-07-10T00:26:06.878120712Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:06.881498 containerd[1717]: time="2025-07-10T00:26:06.881436661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:06.882368 containerd[1717]: time="2025-07-10T00:26:06.882256866Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.077542294s" Jul 10 00:26:06.882368 containerd[1717]: time="2025-07-10T00:26:06.882284209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 10 00:26:08.775564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:08.775709 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.7M memory peak. 
Jul 10 00:26:08.777751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:08.802028 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-9.scope)... Jul 10 00:26:08.802127 systemd[1]: Reloading... Jul 10 00:26:08.902183 zram_generator::config[2702]: No configuration found. Jul 10 00:26:08.997104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:26:09.091835 systemd[1]: Reloading finished in 289 ms. Jul 10 00:26:09.119332 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:26:09.119435 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:26:09.119693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:09.119735 systemd[1]: kubelet.service: Consumed 75ms CPU time, 78.8M memory peak. Jul 10 00:26:09.121649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:09.768128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:09.776431 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:26:09.809965 kubelet[2766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:09.809965 kubelet[2766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 10 00:26:09.809965 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:09.811839 kubelet[2766]: I0710 00:26:09.811790 2766 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:26:09.955941 kubelet[2766]: I0710 00:26:09.955912 2766 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:26:09.955941 kubelet[2766]: I0710 00:26:09.955932 2766 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:26:09.956132 kubelet[2766]: I0710 00:26:09.956120 2766 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:26:09.984579 kubelet[2766]: E0710 00:26:09.984551 2766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:09.986938 kubelet[2766]: I0710 00:26:09.986916 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:26:09.994571 kubelet[2766]: I0710 00:26:09.994548 2766 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:26:09.996506 kubelet[2766]: I0710 00:26:09.996487 2766 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:26:09.997831 kubelet[2766]: I0710 00:26:09.997809 2766 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:26:09.997972 kubelet[2766]: I0710 00:26:09.997831 2766 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-e449e01ea1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:26:09.998078 kubelet[2766]: I0710 00:26:09.997979 2766 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 10 00:26:09.998078 kubelet[2766]: I0710 00:26:09.997989 2766 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:26:09.998125 kubelet[2766]: I0710 00:26:09.998084 2766 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:10.000997 kubelet[2766]: I0710 00:26:10.000984 2766 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:26:10.001049 kubelet[2766]: I0710 00:26:10.001005 2766 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:26:10.001049 kubelet[2766]: I0710 00:26:10.001026 2766 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:26:10.001049 kubelet[2766]: I0710 00:26:10.001035 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:26:10.004623 kubelet[2766]: W0710 00:26:10.004298 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 10 00:26:10.004623 kubelet[2766]: E0710 00:26:10.004343 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.004623 kubelet[2766]: W0710 00:26:10.004565 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-e449e01ea1&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 10 00:26:10.004623 kubelet[2766]: E0710 00:26:10.004598 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-e449e01ea1&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.005067 kubelet[2766]: I0710 00:26:10.005053 2766 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:26:10.005429 kubelet[2766]: I0710 00:26:10.005417 2766 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:26:10.006096 kubelet[2766]: W0710 00:26:10.006081 2766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:26:10.008900 kubelet[2766]: I0710 00:26:10.008777 2766 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:26:10.008900 kubelet[2766]: I0710 00:26:10.008813 2766 server.go:1287] "Started kubelet" Jul 10 00:26:10.013183 kubelet[2766]: I0710 00:26:10.011247 2766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:26:10.013183 kubelet[2766]: I0710 00:26:10.013086 2766 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:26:10.014570 kubelet[2766]: I0710 00:26:10.014552 2766 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:26:10.016275 kubelet[2766]: I0710 00:26:10.016255 2766 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:26:10.016572 kubelet[2766]: E0710 00:26:10.016555 2766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" Jul 10 00:26:10.016871 kubelet[2766]: I0710 00:26:10.016858 2766 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:26:10.016916 kubelet[2766]: I0710 00:26:10.016901 2766 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:26:10.017948 kubelet[2766]: 
I0710 00:26:10.017902 2766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:26:10.018213 kubelet[2766]: I0710 00:26:10.018202 2766 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:26:10.024342 kubelet[2766]: I0710 00:26:10.023513 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:26:10.024342 kubelet[2766]: I0710 00:26:10.023764 2766 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:26:10.024342 kubelet[2766]: I0710 00:26:10.023847 2766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:26:10.027077 kubelet[2766]: W0710 00:26:10.027025 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 10 00:26:10.027150 kubelet[2766]: E0710 00:26:10.027078 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.027211 kubelet[2766]: E0710 00:26:10.027139 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-e449e01ea1?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Jul 10 00:26:10.031179 kubelet[2766]: E0710 
00:26:10.028547 2766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-e449e01ea1.1850bc3024ca05d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-e449e01ea1,UID:ci-4344.1.1-n-e449e01ea1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-e449e01ea1,},FirstTimestamp:2025-07-10 00:26:10.008794579 +0000 UTC m=+0.229171089,LastTimestamp:2025-07-10 00:26:10.008794579 +0000 UTC m=+0.229171089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-e449e01ea1,}" Jul 10 00:26:10.032345 kubelet[2766]: I0710 00:26:10.032324 2766 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:26:10.032486 kubelet[2766]: I0710 00:26:10.032470 2766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:26:10.033383 kubelet[2766]: I0710 00:26:10.033366 2766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:26:10.033383 kubelet[2766]: I0710 00:26:10.033385 2766 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:26:10.033457 kubelet[2766]: I0710 00:26:10.033400 2766 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:26:10.033457 kubelet[2766]: I0710 00:26:10.033406 2766 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:26:10.033457 kubelet[2766]: E0710 00:26:10.033435 2766 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:26:10.040572 kubelet[2766]: W0710 00:26:10.040551 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Jul 10 00:26:10.040643 kubelet[2766]: E0710 00:26:10.040580 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.055336 kubelet[2766]: I0710 00:26:10.055322 2766 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:26:10.055397 kubelet[2766]: I0710 00:26:10.055344 2766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:26:10.055397 kubelet[2766]: I0710 00:26:10.055357 2766 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:10.062164 kubelet[2766]: I0710 00:26:10.062147 2766 policy_none.go:49] "None policy: Start" Jul 10 00:26:10.062220 kubelet[2766]: I0710 00:26:10.062176 2766 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:26:10.062220 kubelet[2766]: I0710 00:26:10.062186 2766 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:26:10.070324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:26:10.082951 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 10 00:26:10.085733 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:26:10.093600 kubelet[2766]: I0710 00:26:10.093581 2766 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:26:10.093713 kubelet[2766]: I0710 00:26:10.093703 2766 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:26:10.093744 kubelet[2766]: I0710 00:26:10.093712 2766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:26:10.094052 kubelet[2766]: I0710 00:26:10.094043 2766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:26:10.095085 kubelet[2766]: E0710 00:26:10.095067 2766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:26:10.095133 kubelet[2766]: E0710 00:26:10.095103 2766 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-e449e01ea1\" not found" Jul 10 00:26:10.141287 systemd[1]: Created slice kubepods-burstable-pod154ba0481b935c8a5cf3251c832c01b0.slice - libcontainer container kubepods-burstable-pod154ba0481b935c8a5cf3251c832c01b0.slice. Jul 10 00:26:10.158029 kubelet[2766]: E0710 00:26:10.158008 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.160525 systemd[1]: Created slice kubepods-burstable-podca18dc824e049b290d4a590dc8fb8b26.slice - libcontainer container kubepods-burstable-podca18dc824e049b290d4a590dc8fb8b26.slice. 
Jul 10 00:26:10.168011 kubelet[2766]: E0710 00:26:10.167994 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.170222 systemd[1]: Created slice kubepods-burstable-podc69fc7eb6099ecd5b703ef533b1fa44b.slice - libcontainer container kubepods-burstable-podc69fc7eb6099ecd5b703ef533b1fa44b.slice. Jul 10 00:26:10.171768 kubelet[2766]: E0710 00:26:10.171749 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.194836 kubelet[2766]: I0710 00:26:10.194822 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.195130 kubelet[2766]: E0710 00:26:10.195112 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.227582 kubelet[2766]: E0710 00:26:10.227559 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-e449e01ea1?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Jul 10 00:26:10.317346 kubelet[2766]: I0710 00:26:10.317313 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317346 kubelet[2766]: I0710 00:26:10.317357 2766 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317601 kubelet[2766]: I0710 00:26:10.317380 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317601 kubelet[2766]: I0710 00:26:10.317413 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317601 kubelet[2766]: I0710 00:26:10.317437 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317601 kubelet[2766]: I0710 00:26:10.317459 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317601 kubelet[2766]: I0710 00:26:10.317483 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/154ba0481b935c8a5cf3251c832c01b0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-e449e01ea1\" (UID: \"154ba0481b935c8a5cf3251c832c01b0\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317739 kubelet[2766]: I0710 00:26:10.317505 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.317739 kubelet[2766]: I0710 00:26:10.317526 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.396753 kubelet[2766]: I0710 00:26:10.396723 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.397080 kubelet[2766]: E0710 00:26:10.397054 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.459022 containerd[1717]: time="2025-07-10T00:26:10.458978770Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-e449e01ea1,Uid:154ba0481b935c8a5cf3251c832c01b0,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:10.469456 containerd[1717]: time="2025-07-10T00:26:10.469430100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-e449e01ea1,Uid:ca18dc824e049b290d4a590dc8fb8b26,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:10.473111 containerd[1717]: time="2025-07-10T00:26:10.473086638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-e449e01ea1,Uid:c69fc7eb6099ecd5b703ef533b1fa44b,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:10.531387 containerd[1717]: time="2025-07-10T00:26:10.531303779Z" level=info msg="connecting to shim 98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e" address="unix:///run/containerd/s/61774ba085cd5594d55cafadebf58c07d45279f1013b0e14c486a52206e150c8" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:10.532292 containerd[1717]: time="2025-07-10T00:26:10.532258644Z" level=info msg="connecting to shim b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94" address="unix:///run/containerd/s/9a822a5f8f2c53619dac8010d5d516b28d5c08015925bd219a1a8862d3ad5112" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:10.565172 containerd[1717]: time="2025-07-10T00:26:10.564922206Z" level=info msg="connecting to shim 60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7" address="unix:///run/containerd/s/5a5b76fd8f2c2fad2ed546a90ee8bc1a90a4877488ad24fe3d527d39c3fe4e1e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:10.567473 systemd[1]: Started cri-containerd-98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e.scope - libcontainer container 98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e. 
Jul 10 00:26:10.572263 systemd[1]: Started cri-containerd-b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94.scope - libcontainer container b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94. Jul 10 00:26:10.591289 systemd[1]: Started cri-containerd-60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7.scope - libcontainer container 60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7. Jul 10 00:26:10.628851 kubelet[2766]: E0710 00:26:10.628774 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-e449e01ea1?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Jul 10 00:26:10.636993 containerd[1717]: time="2025-07-10T00:26:10.636965946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-e449e01ea1,Uid:ca18dc824e049b290d4a590dc8fb8b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94\"" Jul 10 00:26:10.640988 containerd[1717]: time="2025-07-10T00:26:10.640839857Z" level=info msg="CreateContainer within sandbox \"b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:26:10.660609 containerd[1717]: time="2025-07-10T00:26:10.660401024Z" level=info msg="Container f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:10.663322 containerd[1717]: time="2025-07-10T00:26:10.663304170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-e449e01ea1,Uid:154ba0481b935c8a5cf3251c832c01b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e\"" Jul 10 00:26:10.666052 containerd[1717]: 
time="2025-07-10T00:26:10.666033926Z" level=info msg="CreateContainer within sandbox \"98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:26:10.667949 containerd[1717]: time="2025-07-10T00:26:10.667932423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-e449e01ea1,Uid:c69fc7eb6099ecd5b703ef533b1fa44b,Namespace:kube-system,Attempt:0,} returns sandbox id \"60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7\"" Jul 10 00:26:10.669528 containerd[1717]: time="2025-07-10T00:26:10.669504460Z" level=info msg="CreateContainer within sandbox \"60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:26:10.677072 containerd[1717]: time="2025-07-10T00:26:10.677016302Z" level=info msg="CreateContainer within sandbox \"b6a0545b4be4cd659904e0b8314e43ef6483e49186dcbb048d0aff84f9161c94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6\"" Jul 10 00:26:10.677570 containerd[1717]: time="2025-07-10T00:26:10.677541732Z" level=info msg="StartContainer for \"f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6\"" Jul 10 00:26:10.678363 containerd[1717]: time="2025-07-10T00:26:10.678340351Z" level=info msg="connecting to shim f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6" address="unix:///run/containerd/s/9a822a5f8f2c53619dac8010d5d516b28d5c08015925bd219a1a8862d3ad5112" protocol=ttrpc version=3 Jul 10 00:26:10.690480 containerd[1717]: time="2025-07-10T00:26:10.690459998Z" level=info msg="Container 0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:10.691456 systemd[1]: Started 
cri-containerd-f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6.scope - libcontainer container f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6. Jul 10 00:26:10.707475 containerd[1717]: time="2025-07-10T00:26:10.707355165Z" level=info msg="CreateContainer within sandbox \"60dd9bd7bf1b3261c7091294fe61e125b18f1ecff277243602abc71869fd1ee7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506\"" Jul 10 00:26:10.707990 containerd[1717]: time="2025-07-10T00:26:10.707970829Z" level=info msg="StartContainer for \"0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506\"" Jul 10 00:26:10.712031 containerd[1717]: time="2025-07-10T00:26:10.711997487Z" level=info msg="connecting to shim 0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506" address="unix:///run/containerd/s/5a5b76fd8f2c2fad2ed546a90ee8bc1a90a4877488ad24fe3d527d39c3fe4e1e" protocol=ttrpc version=3 Jul 10 00:26:10.714177 containerd[1717]: time="2025-07-10T00:26:10.713464544Z" level=info msg="Container 258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:10.732593 containerd[1717]: time="2025-07-10T00:26:10.732548495Z" level=info msg="CreateContainer within sandbox \"98ae2fa266424d4825388538108f6e917fdc6d83023c6d9ee78f6bd46802346e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269\"" Jul 10 00:26:10.733072 containerd[1717]: time="2025-07-10T00:26:10.733015122Z" level=info msg="StartContainer for \"258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269\"" Jul 10 00:26:10.734033 containerd[1717]: time="2025-07-10T00:26:10.733970839Z" level=info msg="connecting to shim 258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269" 
address="unix:///run/containerd/s/61774ba085cd5594d55cafadebf58c07d45279f1013b0e14c486a52206e150c8" protocol=ttrpc version=3 Jul 10 00:26:10.735359 systemd[1]: Started cri-containerd-0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506.scope - libcontainer container 0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506. Jul 10 00:26:10.748734 containerd[1717]: time="2025-07-10T00:26:10.748709525Z" level=info msg="StartContainer for \"f225628a3b7033a5472fc05b1dea09ca616d10253659785c6b21dafc060581e6\" returns successfully" Jul 10 00:26:10.757814 systemd[1]: Started cri-containerd-258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269.scope - libcontainer container 258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269. Jul 10 00:26:10.799421 kubelet[2766]: I0710 00:26:10.799037 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.799751 kubelet[2766]: E0710 00:26:10.799650 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:10.813268 containerd[1717]: time="2025-07-10T00:26:10.813240347Z" level=info msg="StartContainer for \"0dda94c258a1ef43e9a99d57cb1290d6ef81cf0d6d865de91563b7a3c9ef4506\" returns successfully" Jul 10 00:26:10.860400 containerd[1717]: time="2025-07-10T00:26:10.860303156Z" level=info msg="StartContainer for \"258948c9c8dedd122c8b67c1989fe139d3de128f29d65492a0ecd3828782a269\" returns successfully" Jul 10 00:26:11.061596 kubelet[2766]: E0710 00:26:11.061397 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:11.064326 kubelet[2766]: E0710 00:26:11.064146 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:11.065830 kubelet[2766]: E0710 00:26:11.065818 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:11.602710 kubelet[2766]: I0710 00:26:11.602387 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.067670 kubelet[2766]: E0710 00:26:12.067642 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.068049 kubelet[2766]: E0710 00:26:12.067997 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.069411 kubelet[2766]: E0710 00:26:12.069391 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.247320 kubelet[2766]: E0710 00:26:12.247285 2766 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-e449e01ea1\" not found" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.300013 kubelet[2766]: I0710 00:26:12.299971 2766 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.317185 kubelet[2766]: I0710 00:26:12.317096 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.347352 kubelet[2766]: E0710 00:26:12.347205 2766 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ci-4344.1.1-n-e449e01ea1.1850bc3024ca05d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-e449e01ea1,UID:ci-4344.1.1-n-e449e01ea1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-e449e01ea1,},FirstTimestamp:2025-07-10 00:26:10.008794579 +0000 UTC m=+0.229171089,LastTimestamp:2025-07-10 00:26:10.008794579 +0000 UTC m=+0.229171089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-e449e01ea1,}" Jul 10 00:26:12.373149 kubelet[2766]: E0710 00:26:12.373043 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.373149 kubelet[2766]: I0710 00:26:12.373063 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.374452 kubelet[2766]: E0710 00:26:12.374430 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-e449e01ea1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.374452 kubelet[2766]: I0710 00:26:12.374449 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:12.375660 kubelet[2766]: E0710 00:26:12.375640 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:13.004557 
kubelet[2766]: I0710 00:26:13.004520 2766 apiserver.go:52] "Watching apiserver" Jul 10 00:26:13.017839 kubelet[2766]: I0710 00:26:13.017814 2766 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:26:14.306009 systemd[1]: Reload requested from client PID 3039 ('systemctl') (unit session-9.scope)... Jul 10 00:26:14.306028 systemd[1]: Reloading... Jul 10 00:26:14.392190 zram_generator::config[3081]: No configuration found. Jul 10 00:26:14.482265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:26:14.585255 systemd[1]: Reloading finished in 278 ms. Jul 10 00:26:14.606126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:14.627446 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:26:14.627638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:14.627674 systemd[1]: kubelet.service: Consumed 531ms CPU time, 131M memory peak. Jul 10 00:26:14.629289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:15.267281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:15.272533 (kubelet)[3152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:26:15.309570 kubelet[3152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:15.309570 kubelet[3152]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 10 00:26:15.309570 kubelet[3152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:15.309570 kubelet[3152]: I0710 00:26:15.309536 3152 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:26:15.315310 kubelet[3152]: I0710 00:26:15.315286 3152 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:26:15.315310 kubelet[3152]: I0710 00:26:15.315304 3152 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:26:15.315529 kubelet[3152]: I0710 00:26:15.315513 3152 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:26:15.316581 kubelet[3152]: I0710 00:26:15.316561 3152 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:26:15.320182 kubelet[3152]: I0710 00:26:15.320145 3152 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:26:15.323804 kubelet[3152]: I0710 00:26:15.323787 3152 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:26:15.326852 kubelet[3152]: I0710 00:26:15.326816 3152 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:26:15.327075 kubelet[3152]: I0710 00:26:15.326976 3152 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:26:15.327250 kubelet[3152]: I0710 00:26:15.327008 3152 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-e449e01ea1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:26:15.327346 kubelet[3152]: I0710 00:26:15.327261 3152 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 10 00:26:15.327346 kubelet[3152]: I0710 00:26:15.327270 3152 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:26:15.327346 kubelet[3152]: I0710 00:26:15.327337 3152 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:15.327487 kubelet[3152]: I0710 00:26:15.327447 3152 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:26:15.327487 kubelet[3152]: I0710 00:26:15.327463 3152 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:26:15.327487 kubelet[3152]: I0710 00:26:15.327482 3152 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:26:15.327544 kubelet[3152]: I0710 00:26:15.327491 3152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:26:15.333778 kubelet[3152]: I0710 00:26:15.333540 3152 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:26:15.334034 kubelet[3152]: I0710 00:26:15.334023 3152 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:26:15.334458 kubelet[3152]: I0710 00:26:15.334448 3152 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:26:15.334528 kubelet[3152]: I0710 00:26:15.334522 3152 server.go:1287] "Started kubelet" Jul 10 00:26:15.338108 kubelet[3152]: I0710 00:26:15.337949 3152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:26:15.345454 kubelet[3152]: I0710 00:26:15.345412 3152 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:26:15.346413 kubelet[3152]: I0710 00:26:15.346317 3152 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:26:15.347326 kubelet[3152]: I0710 00:26:15.347096 3152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:26:15.347326 kubelet[3152]: I0710 00:26:15.347306 3152 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:26:15.347495 kubelet[3152]: I0710 00:26:15.347451 3152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:26:15.351148 kubelet[3152]: E0710 00:26:15.348552 3152 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-e449e01ea1\" not found" Jul 10 00:26:15.351148 kubelet[3152]: I0710 00:26:15.348596 3152 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:26:15.351148 kubelet[3152]: I0710 00:26:15.348733 3152 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:26:15.351148 kubelet[3152]: I0710 00:26:15.348809 3152 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:26:15.352759 kubelet[3152]: I0710 00:26:15.352508 3152 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:26:15.353088 kubelet[3152]: I0710 00:26:15.352989 3152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:26:15.354660 kubelet[3152]: E0710 00:26:15.354641 3152 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:26:15.356760 kubelet[3152]: I0710 00:26:15.356732 3152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:26:15.358025 kubelet[3152]: I0710 00:26:15.357947 3152 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:26:15.358025 kubelet[3152]: I0710 00:26:15.357970 3152 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:26:15.358025 kubelet[3152]: I0710 00:26:15.357986 3152 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:26:15.358025 kubelet[3152]: I0710 00:26:15.357995 3152 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:26:15.358313 kubelet[3152]: E0710 00:26:15.358030 3152 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:26:15.369225 kubelet[3152]: I0710 00:26:15.368848 3152 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:26:15.393093 sudo[3180]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:26:15.393995 sudo[3180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:26:15.430488 kubelet[3152]: I0710 00:26:15.430419 3152 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:26:15.430585 kubelet[3152]: I0710 00:26:15.430577 3152 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430627 3152 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430756 3152 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430766 3152 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430782 3152 policy_none.go:49] "None policy: Start" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430790 3152 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430798 3152 state_mem.go:35] 
"Initializing new in-memory state store" Jul 10 00:26:15.430948 kubelet[3152]: I0710 00:26:15.430902 3152 state_mem.go:75] "Updated machine memory state" Jul 10 00:26:15.434676 kubelet[3152]: I0710 00:26:15.434657 3152 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:26:15.436197 kubelet[3152]: I0710 00:26:15.435436 3152 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:26:15.436197 kubelet[3152]: I0710 00:26:15.435450 3152 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:26:15.436197 kubelet[3152]: I0710 00:26:15.435985 3152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:26:15.438975 kubelet[3152]: E0710 00:26:15.438960 3152 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:26:15.458431 kubelet[3152]: I0710 00:26:15.458410 3152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.459978 kubelet[3152]: I0710 00:26:15.459288 3152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.460681 kubelet[3152]: I0710 00:26:15.459378 3152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.467977 kubelet[3152]: W0710 00:26:15.467964 3152 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:26:15.469931 kubelet[3152]: W0710 00:26:15.469917 3152 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 
00:26:15.470614 kubelet[3152]: W0710 00:26:15.470559 3152 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:26:15.538356 kubelet[3152]: I0710 00:26:15.538255 3152 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549388 kubelet[3152]: I0710 00:26:15.549368 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549458 kubelet[3152]: I0710 00:26:15.549398 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549458 kubelet[3152]: I0710 00:26:15.549441 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/154ba0481b935c8a5cf3251c832c01b0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-e449e01ea1\" (UID: \"154ba0481b935c8a5cf3251c832c01b0\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549506 kubelet[3152]: I0710 00:26:15.549458 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " 
pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549506 kubelet[3152]: I0710 00:26:15.549473 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca18dc824e049b290d4a590dc8fb8b26-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-e449e01ea1\" (UID: \"ca18dc824e049b290d4a590dc8fb8b26\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549545 kubelet[3152]: I0710 00:26:15.549509 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549545 kubelet[3152]: I0710 00:26:15.549525 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549545 kubelet[3152]: I0710 00:26:15.549542 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.549605 kubelet[3152]: I0710 00:26:15.549582 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/c69fc7eb6099ecd5b703ef533b1fa44b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-e449e01ea1\" (UID: \"c69fc7eb6099ecd5b703ef533b1fa44b\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.550718 kubelet[3152]: I0710 00:26:15.550561 3152 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.550718 kubelet[3152]: I0710 00:26:15.550612 3152 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:15.916723 sudo[3180]: pam_unix(sudo:session): session closed for user root Jul 10 00:26:16.330416 kubelet[3152]: I0710 00:26:16.330362 3152 apiserver.go:52] "Watching apiserver" Jul 10 00:26:16.349120 kubelet[3152]: I0710 00:26:16.349079 3152 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:26:16.414730 kubelet[3152]: I0710 00:26:16.414332 3152 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:16.423565 kubelet[3152]: W0710 00:26:16.423545 3152 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:26:16.423778 kubelet[3152]: E0710 00:26:16.423605 3152 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-e449e01ea1\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" Jul 10 00:26:16.448316 kubelet[3152]: I0710 00:26:16.448205 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-e449e01ea1" podStartSLOduration=1.448190853 podStartE2EDuration="1.448190853s" podCreationTimestamp="2025-07-10 00:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-10 00:26:16.434313054 +0000 UTC m=+1.158222524" watchObservedRunningTime="2025-07-10 00:26:16.448190853 +0000 UTC m=+1.172100486" Jul 10 00:26:16.456061 kubelet[3152]: I0710 00:26:16.456005 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-e449e01ea1" podStartSLOduration=1.455991404 podStartE2EDuration="1.455991404s" podCreationTimestamp="2025-07-10 00:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:16.448189768 +0000 UTC m=+1.172099240" watchObservedRunningTime="2025-07-10 00:26:16.455991404 +0000 UTC m=+1.179900878" Jul 10 00:26:16.456311 kubelet[3152]: I0710 00:26:16.456287 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-e449e01ea1" podStartSLOduration=1.456278105 podStartE2EDuration="1.456278105s" podCreationTimestamp="2025-07-10 00:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:16.45617287 +0000 UTC m=+1.180082334" watchObservedRunningTime="2025-07-10 00:26:16.456278105 +0000 UTC m=+1.180187574" Jul 10 00:26:17.234857 sudo[2180]: pam_unix(sudo:session): session closed for user root Jul 10 00:26:17.335350 sshd[2179]: Connection closed by 10.200.16.10 port 44030 Jul 10 00:26:17.335868 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:17.339359 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:44030.service: Deactivated successfully. Jul 10 00:26:17.341100 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:26:17.341335 systemd[1]: session-9.scope: Consumed 3.154s CPU time, 270M memory peak. Jul 10 00:26:17.342565 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. 
Jul 10 00:26:17.344075 systemd-logind[1698]: Removed session 9. Jul 10 00:26:20.476707 kubelet[3152]: I0710 00:26:20.476675 3152 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:26:20.477198 kubelet[3152]: I0710 00:26:20.477112 3152 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:26:20.477239 containerd[1717]: time="2025-07-10T00:26:20.476965603Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:26:21.265557 systemd[1]: Created slice kubepods-besteffort-podcb5cdc4c_e85a_48e1_89d1_c65b3623a03a.slice - libcontainer container kubepods-besteffort-podcb5cdc4c_e85a_48e1_89d1_c65b3623a03a.slice. Jul 10 00:26:21.278022 systemd[1]: Created slice kubepods-burstable-pod6cf73113_592c_4607_94f2_66abe0c5ecee.slice - libcontainer container kubepods-burstable-pod6cf73113_592c_4607_94f2_66abe0c5ecee.slice. Jul 10 00:26:21.287645 kubelet[3152]: I0710 00:26:21.287616 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-bpf-maps\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287645 kubelet[3152]: I0710 00:26:21.287647 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cni-path\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287666 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-net\") pod 
\"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287681 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-hubble-tls\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287696 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cf73113-592c-4607-94f2-66abe0c5ecee-clustermesh-secrets\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287709 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb5cdc4c-e85a-48e1-89d1-c65b3623a03a-lib-modules\") pod \"kube-proxy-6lqmx\" (UID: \"cb5cdc4c-e85a-48e1-89d1-c65b3623a03a\") " pod="kube-system/kube-proxy-6lqmx" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287725 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-run\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.287930 kubelet[3152]: I0710 00:26:21.287757 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-hostproc\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289284 kubelet[3152]: I0710 
00:26:21.287805 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-etc-cni-netd\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289284 kubelet[3152]: I0710 00:26:21.287832 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq98n\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-kube-api-access-nq98n\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289284 kubelet[3152]: I0710 00:26:21.287860 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-config-path\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289284 kubelet[3152]: I0710 00:26:21.287881 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb5cdc4c-e85a-48e1-89d1-c65b3623a03a-xtables-lock\") pod \"kube-proxy-6lqmx\" (UID: \"cb5cdc4c-e85a-48e1-89d1-c65b3623a03a\") " pod="kube-system/kube-proxy-6lqmx" Jul 10 00:26:21.289284 kubelet[3152]: I0710 00:26:21.287928 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ckrc\" (UniqueName: \"kubernetes.io/projected/cb5cdc4c-e85a-48e1-89d1-c65b3623a03a-kube-api-access-4ckrc\") pod \"kube-proxy-6lqmx\" (UID: \"cb5cdc4c-e85a-48e1-89d1-c65b3623a03a\") " pod="kube-system/kube-proxy-6lqmx" Jul 10 00:26:21.289392 kubelet[3152]: I0710 00:26:21.287952 3152 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-cgroup\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289392 kubelet[3152]: I0710 00:26:21.287979 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-lib-modules\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289392 kubelet[3152]: I0710 00:26:21.288000 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-xtables-lock\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.289625 kubelet[3152]: I0710 00:26:21.289509 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb5cdc4c-e85a-48e1-89d1-c65b3623a03a-kube-proxy\") pod \"kube-proxy-6lqmx\" (UID: \"cb5cdc4c-e85a-48e1-89d1-c65b3623a03a\") " pod="kube-system/kube-proxy-6lqmx" Jul 10 00:26:21.289625 kubelet[3152]: I0710 00:26:21.289566 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-kernel\") pod \"cilium-4w487\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " pod="kube-system/cilium-4w487" Jul 10 00:26:21.575571 containerd[1717]: time="2025-07-10T00:26:21.575251729Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-6lqmx,Uid:cb5cdc4c-e85a-48e1-89d1-c65b3623a03a,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:21.583630 containerd[1717]: time="2025-07-10T00:26:21.582563260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4w487,Uid:6cf73113-592c-4607-94f2-66abe0c5ecee,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:21.592120 systemd[1]: Created slice kubepods-besteffort-pod75e91e55_fa7a_4496_a07c_ad28726f943d.slice - libcontainer container kubepods-besteffort-pod75e91e55_fa7a_4496_a07c_ad28726f943d.slice. Jul 10 00:26:21.668403 containerd[1717]: time="2025-07-10T00:26:21.668348539Z" level=info msg="connecting to shim 7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:21.668490 containerd[1717]: time="2025-07-10T00:26:21.668352846Z" level=info msg="connecting to shim cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c" address="unix:///run/containerd/s/82ddc30daadcccc25c2df67221876d38dadea12b8f75a5c618d6a935d32018d6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:21.689293 systemd[1]: Started cri-containerd-7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec.scope - libcontainer container 7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec. Jul 10 00:26:21.690469 systemd[1]: Started cri-containerd-cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c.scope - libcontainer container cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c. 
Jul 10 00:26:21.693603 kubelet[3152]: I0710 00:26:21.693563 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4mnj\" (UniqueName: \"kubernetes.io/projected/75e91e55-fa7a-4496-a07c-ad28726f943d-kube-api-access-j4mnj\") pod \"cilium-operator-6c4d7847fc-6pxmc\" (UID: \"75e91e55-fa7a-4496-a07c-ad28726f943d\") " pod="kube-system/cilium-operator-6c4d7847fc-6pxmc" Jul 10 00:26:21.693884 kubelet[3152]: I0710 00:26:21.693613 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e91e55-fa7a-4496-a07c-ad28726f943d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6pxmc\" (UID: \"75e91e55-fa7a-4496-a07c-ad28726f943d\") " pod="kube-system/cilium-operator-6c4d7847fc-6pxmc" Jul 10 00:26:21.720066 containerd[1717]: time="2025-07-10T00:26:21.719951961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4w487,Uid:6cf73113-592c-4607-94f2-66abe0c5ecee,Namespace:kube-system,Attempt:0,} returns sandbox id \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\"" Jul 10 00:26:21.721947 containerd[1717]: time="2025-07-10T00:26:21.721904298Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:26:21.722784 containerd[1717]: time="2025-07-10T00:26:21.722718123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6lqmx,Uid:cb5cdc4c-e85a-48e1-89d1-c65b3623a03a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c\"" Jul 10 00:26:21.724665 containerd[1717]: time="2025-07-10T00:26:21.724616957Z" level=info msg="CreateContainer within sandbox \"cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:26:21.743659 containerd[1717]: 
time="2025-07-10T00:26:21.743635415Z" level=info msg="Container 53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:21.757103 containerd[1717]: time="2025-07-10T00:26:21.757077067Z" level=info msg="CreateContainer within sandbox \"cf3dcd55b57611cd9bdd3edf92234f88b8f9a8b0fbfa655d6de0f2a936077c6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950\"" Jul 10 00:26:21.758190 containerd[1717]: time="2025-07-10T00:26:21.757492197Z" level=info msg="StartContainer for \"53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950\"" Jul 10 00:26:21.758766 containerd[1717]: time="2025-07-10T00:26:21.758740487Z" level=info msg="connecting to shim 53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950" address="unix:///run/containerd/s/82ddc30daadcccc25c2df67221876d38dadea12b8f75a5c618d6a935d32018d6" protocol=ttrpc version=3 Jul 10 00:26:21.770286 systemd[1]: Started cri-containerd-53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950.scope - libcontainer container 53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950. 
Jul 10 00:26:21.799755 containerd[1717]: time="2025-07-10T00:26:21.799682264Z" level=info msg="StartContainer for \"53a48b233009b23a9e1db6e1b648fda0e8df0919995919bcca6d7e2a11da0950\" returns successfully" Jul 10 00:26:21.898864 containerd[1717]: time="2025-07-10T00:26:21.898766662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6pxmc,Uid:75e91e55-fa7a-4496-a07c-ad28726f943d,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:21.930273 containerd[1717]: time="2025-07-10T00:26:21.930246434Z" level=info msg="connecting to shim c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef" address="unix:///run/containerd/s/4d1b4494001917bf6b02f9732a71b520012190c9a5ac52d81bec2a0e1b95f250" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:21.949314 systemd[1]: Started cri-containerd-c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef.scope - libcontainer container c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef. Jul 10 00:26:21.983368 containerd[1717]: time="2025-07-10T00:26:21.983347367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6pxmc,Uid:75e91e55-fa7a-4496-a07c-ad28726f943d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\"" Jul 10 00:26:22.441967 kubelet[3152]: I0710 00:26:22.441920 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6lqmx" podStartSLOduration=1.4419031979999999 podStartE2EDuration="1.441903198s" podCreationTimestamp="2025-07-10 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:22.441717768 +0000 UTC m=+7.165627237" watchObservedRunningTime="2025-07-10 00:26:22.441903198 +0000 UTC m=+7.165812670" Jul 10 00:26:30.722292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248801826.mount: Deactivated 
successfully. Jul 10 00:26:34.823408 containerd[1717]: time="2025-07-10T00:26:34.823353156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:34.825605 containerd[1717]: time="2025-07-10T00:26:34.825574278Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:26:34.828420 containerd[1717]: time="2025-07-10T00:26:34.828379005Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:34.829353 containerd[1717]: time="2025-07-10T00:26:34.829256115Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.107321417s" Jul 10 00:26:34.829353 containerd[1717]: time="2025-07-10T00:26:34.829289434Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:26:34.830446 containerd[1717]: time="2025-07-10T00:26:34.830202654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:26:34.831696 containerd[1717]: time="2025-07-10T00:26:34.831670936Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:26:34.856331 containerd[1717]: time="2025-07-10T00:26:34.856301297Z" level=info msg="Container 2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:34.868888 containerd[1717]: time="2025-07-10T00:26:34.868861108Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\"" Jul 10 00:26:34.869277 containerd[1717]: time="2025-07-10T00:26:34.869232859Z" level=info msg="StartContainer for \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\"" Jul 10 00:26:34.870259 containerd[1717]: time="2025-07-10T00:26:34.870209452Z" level=info msg="connecting to shim 2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" protocol=ttrpc version=3 Jul 10 00:26:34.889301 systemd[1]: Started cri-containerd-2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd.scope - libcontainer container 2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd. Jul 10 00:26:34.913692 containerd[1717]: time="2025-07-10T00:26:34.913620006Z" level=info msg="StartContainer for \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" returns successfully" Jul 10 00:26:34.920387 systemd[1]: cri-containerd-2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd.scope: Deactivated successfully. 
Jul 10 00:26:34.921704 containerd[1717]: time="2025-07-10T00:26:34.921676901Z" level=info msg="received exit event container_id:\"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" id:\"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" pid:3568 exited_at:{seconds:1752107194 nanos:920565931}" Jul 10 00:26:34.922040 containerd[1717]: time="2025-07-10T00:26:34.921892149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" id:\"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" pid:3568 exited_at:{seconds:1752107194 nanos:920565931}" Jul 10 00:26:35.853815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd-rootfs.mount: Deactivated successfully. Jul 10 00:26:40.224552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261289679.mount: Deactivated successfully. Jul 10 00:26:40.463580 containerd[1717]: time="2025-07-10T00:26:40.463509986Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:26:40.490864 containerd[1717]: time="2025-07-10T00:26:40.490258685Z" level=info msg="Container 91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:40.494302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004148278.mount: Deactivated successfully. 
Jul 10 00:26:40.504288 containerd[1717]: time="2025-07-10T00:26:40.504251415Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\"" Jul 10 00:26:40.505205 containerd[1717]: time="2025-07-10T00:26:40.505183470Z" level=info msg="StartContainer for \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\"" Jul 10 00:26:40.507050 containerd[1717]: time="2025-07-10T00:26:40.507021177Z" level=info msg="connecting to shim 91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" protocol=ttrpc version=3 Jul 10 00:26:40.531410 systemd[1]: Started cri-containerd-91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079.scope - libcontainer container 91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079. Jul 10 00:26:40.571804 containerd[1717]: time="2025-07-10T00:26:40.571748641Z" level=info msg="StartContainer for \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" returns successfully" Jul 10 00:26:40.582223 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:26:40.582462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:26:40.582781 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:26:40.584511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:26:40.587560 systemd[1]: cri-containerd-91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079.scope: Deactivated successfully. 
Jul 10 00:26:40.588952 containerd[1717]: time="2025-07-10T00:26:40.588894347Z" level=info msg="received exit event container_id:\"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" id:\"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" pid:3627 exited_at:{seconds:1752107200 nanos:588635199}" Jul 10 00:26:40.589097 containerd[1717]: time="2025-07-10T00:26:40.589078667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" id:\"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" pid:3627 exited_at:{seconds:1752107200 nanos:588635199}" Jul 10 00:26:40.608179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:26:40.940452 containerd[1717]: time="2025-07-10T00:26:40.940418559Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:40.942880 containerd[1717]: time="2025-07-10T00:26:40.942845593Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:26:40.945298 containerd[1717]: time="2025-07-10T00:26:40.945248124Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:40.946186 containerd[1717]: time="2025-07-10T00:26:40.946079073Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.115847662s" Jul 10 00:26:40.946186 containerd[1717]: time="2025-07-10T00:26:40.946109781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:26:40.949216 containerd[1717]: time="2025-07-10T00:26:40.949192721Z" level=info msg="CreateContainer within sandbox \"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:26:40.962190 containerd[1717]: time="2025-07-10T00:26:40.962165551Z" level=info msg="Container 7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:40.974298 containerd[1717]: time="2025-07-10T00:26:40.974273333Z" level=info msg="CreateContainer within sandbox \"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\"" Jul 10 00:26:40.975505 containerd[1717]: time="2025-07-10T00:26:40.974720538Z" level=info msg="StartContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\"" Jul 10 00:26:40.975505 containerd[1717]: time="2025-07-10T00:26:40.975453406Z" level=info msg="connecting to shim 7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789" address="unix:///run/containerd/s/4d1b4494001917bf6b02f9732a71b520012190c9a5ac52d81bec2a0e1b95f250" protocol=ttrpc version=3 Jul 10 00:26:40.995347 systemd[1]: Started cri-containerd-7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789.scope - libcontainer container 7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789. 
Jul 10 00:26:41.019838 containerd[1717]: time="2025-07-10T00:26:41.019799809Z" level=info msg="StartContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" returns successfully" Jul 10 00:26:41.218021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079-rootfs.mount: Deactivated successfully. Jul 10 00:26:41.469095 containerd[1717]: time="2025-07-10T00:26:41.468997209Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:26:41.497390 containerd[1717]: time="2025-07-10T00:26:41.497262401Z" level=info msg="Container f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:41.517194 containerd[1717]: time="2025-07-10T00:26:41.515098677Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\"" Jul 10 00:26:41.519290 containerd[1717]: time="2025-07-10T00:26:41.519265581Z" level=info msg="StartContainer for \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\"" Jul 10 00:26:41.520930 containerd[1717]: time="2025-07-10T00:26:41.520906345Z" level=info msg="connecting to shim f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" protocol=ttrpc version=3 Jul 10 00:26:41.553583 systemd[1]: Started cri-containerd-f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207.scope - libcontainer container f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207. 
Jul 10 00:26:41.567065 kubelet[3152]: I0710 00:26:41.566952 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6pxmc" podStartSLOduration=1.604687722 podStartE2EDuration="20.566935326s" podCreationTimestamp="2025-07-10 00:26:21 +0000 UTC" firstStartedPulling="2025-07-10 00:26:21.984413951 +0000 UTC m=+6.708323422" lastFinishedPulling="2025-07-10 00:26:40.946661563 +0000 UTC m=+25.670571026" observedRunningTime="2025-07-10 00:26:41.511970461 +0000 UTC m=+26.235879934" watchObservedRunningTime="2025-07-10 00:26:41.566935326 +0000 UTC m=+26.290844787" Jul 10 00:26:41.648290 containerd[1717]: time="2025-07-10T00:26:41.648258742Z" level=info msg="StartContainer for \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" returns successfully" Jul 10 00:26:41.667616 systemd[1]: cri-containerd-f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207.scope: Deactivated successfully. Jul 10 00:26:41.668600 containerd[1717]: time="2025-07-10T00:26:41.668566078Z" level=info msg="received exit event container_id:\"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" id:\"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" pid:3709 exited_at:{seconds:1752107201 nanos:668178939}" Jul 10 00:26:41.668884 containerd[1717]: time="2025-07-10T00:26:41.668859916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" id:\"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" pid:3709 exited_at:{seconds:1752107201 nanos:668178939}" Jul 10 00:26:41.701854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207-rootfs.mount: Deactivated successfully. 
Jul 10 00:26:42.479455 containerd[1717]: time="2025-07-10T00:26:42.476299716Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:26:42.507859 containerd[1717]: time="2025-07-10T00:26:42.507831651Z" level=info msg="Container 6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:42.520890 containerd[1717]: time="2025-07-10T00:26:42.520863423Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\"" Jul 10 00:26:42.521988 containerd[1717]: time="2025-07-10T00:26:42.521215269Z" level=info msg="StartContainer for \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\"" Jul 10 00:26:42.521988 containerd[1717]: time="2025-07-10T00:26:42.521913812Z" level=info msg="connecting to shim 6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" protocol=ttrpc version=3 Jul 10 00:26:42.543291 systemd[1]: Started cri-containerd-6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc.scope - libcontainer container 6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc. Jul 10 00:26:42.559841 systemd[1]: cri-containerd-6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc.scope: Deactivated successfully. 
Jul 10 00:26:42.560895 containerd[1717]: time="2025-07-10T00:26:42.560851542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" id:\"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" pid:3750 exited_at:{seconds:1752107202 nanos:560364328}" Jul 10 00:26:42.563873 containerd[1717]: time="2025-07-10T00:26:42.563626849Z" level=info msg="received exit event container_id:\"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" id:\"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" pid:3750 exited_at:{seconds:1752107202 nanos:560364328}" Jul 10 00:26:42.569251 containerd[1717]: time="2025-07-10T00:26:42.569229208Z" level=info msg="StartContainer for \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" returns successfully" Jul 10 00:26:42.577759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc-rootfs.mount: Deactivated successfully. 
Jul 10 00:26:43.479361 containerd[1717]: time="2025-07-10T00:26:43.478728681Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:26:43.502321 containerd[1717]: time="2025-07-10T00:26:43.502292609Z" level=info msg="Container 105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:43.514477 containerd[1717]: time="2025-07-10T00:26:43.514448784Z" level=info msg="CreateContainer within sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\"" Jul 10 00:26:43.515177 containerd[1717]: time="2025-07-10T00:26:43.514859391Z" level=info msg="StartContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\"" Jul 10 00:26:43.515771 containerd[1717]: time="2025-07-10T00:26:43.515747355Z" level=info msg="connecting to shim 105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b" address="unix:///run/containerd/s/0bacbd98be384d0a1d78fcc61645febdd799ab45c5e12328e9ded2411eefb602" protocol=ttrpc version=3 Jul 10 00:26:43.534340 systemd[1]: Started cri-containerd-105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b.scope - libcontainer container 105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b. 
Jul 10 00:26:43.562282 containerd[1717]: time="2025-07-10T00:26:43.562257859Z" level=info msg="StartContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" returns successfully" Jul 10 00:26:43.614003 containerd[1717]: time="2025-07-10T00:26:43.613953026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" id:\"94746b253395d7a4f79dcbd5c08ee19f9bc5475d09461f43d960a039306c4c72\" pid:3818 exited_at:{seconds:1752107203 nanos:613663437}" Jul 10 00:26:43.658313 kubelet[3152]: I0710 00:26:43.658285 3152 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:26:43.693958 systemd[1]: Created slice kubepods-burstable-pod077cea8d_5807_4750_b83c_ffedfd662f43.slice - libcontainer container kubepods-burstable-pod077cea8d_5807_4750_b83c_ffedfd662f43.slice. Jul 10 00:26:43.706033 systemd[1]: Created slice kubepods-burstable-poddd9e9e78_17c4_4e6f_9941_56b5520a9777.slice - libcontainer container kubepods-burstable-poddd9e9e78_17c4_4e6f_9941_56b5520a9777.slice. 
Jul 10 00:26:43.735672 kubelet[3152]: I0710 00:26:43.735357 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chv8\" (UniqueName: \"kubernetes.io/projected/077cea8d-5807-4750-b83c-ffedfd662f43-kube-api-access-2chv8\") pod \"coredns-668d6bf9bc-gsq7g\" (UID: \"077cea8d-5807-4750-b83c-ffedfd662f43\") " pod="kube-system/coredns-668d6bf9bc-gsq7g" Jul 10 00:26:43.735802 kubelet[3152]: I0710 00:26:43.735790 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077cea8d-5807-4750-b83c-ffedfd662f43-config-volume\") pod \"coredns-668d6bf9bc-gsq7g\" (UID: \"077cea8d-5807-4750-b83c-ffedfd662f43\") " pod="kube-system/coredns-668d6bf9bc-gsq7g" Jul 10 00:26:43.735881 kubelet[3152]: I0710 00:26:43.735872 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks62w\" (UniqueName: \"kubernetes.io/projected/dd9e9e78-17c4-4e6f-9941-56b5520a9777-kube-api-access-ks62w\") pod \"coredns-668d6bf9bc-lbf7t\" (UID: \"dd9e9e78-17c4-4e6f-9941-56b5520a9777\") " pod="kube-system/coredns-668d6bf9bc-lbf7t" Jul 10 00:26:43.736037 kubelet[3152]: I0710 00:26:43.736027 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd9e9e78-17c4-4e6f-9941-56b5520a9777-config-volume\") pod \"coredns-668d6bf9bc-lbf7t\" (UID: \"dd9e9e78-17c4-4e6f-9941-56b5520a9777\") " pod="kube-system/coredns-668d6bf9bc-lbf7t" Jul 10 00:26:44.001882 containerd[1717]: time="2025-07-10T00:26:44.001783962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsq7g,Uid:077cea8d-5807-4750-b83c-ffedfd662f43,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:44.010746 containerd[1717]: time="2025-07-10T00:26:44.010713209Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-lbf7t,Uid:dd9e9e78-17c4-4e6f-9941-56b5520a9777,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:44.500174 kubelet[3152]: I0710 00:26:44.499833 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4w487" podStartSLOduration=10.391175699 podStartE2EDuration="23.499817466s" podCreationTimestamp="2025-07-10 00:26:21 +0000 UTC" firstStartedPulling="2025-07-10 00:26:21.721360229 +0000 UTC m=+6.445269700" lastFinishedPulling="2025-07-10 00:26:34.830001994 +0000 UTC m=+19.553911467" observedRunningTime="2025-07-10 00:26:44.49839412 +0000 UTC m=+29.222303596" watchObservedRunningTime="2025-07-10 00:26:44.499817466 +0000 UTC m=+29.223726967" Jul 10 00:26:45.587698 systemd-networkd[1359]: cilium_host: Link UP Jul 10 00:26:45.589470 systemd-networkd[1359]: cilium_net: Link UP Jul 10 00:26:45.589605 systemd-networkd[1359]: cilium_host: Gained carrier Jul 10 00:26:45.589716 systemd-networkd[1359]: cilium_net: Gained carrier Jul 10 00:26:45.737512 systemd-networkd[1359]: cilium_vxlan: Link UP Jul 10 00:26:45.737565 systemd-networkd[1359]: cilium_vxlan: Gained carrier Jul 10 00:26:45.802261 systemd-networkd[1359]: cilium_host: Gained IPv6LL Jul 10 00:26:45.926190 kernel: NET: Registered PF_ALG protocol family Jul 10 00:26:46.474653 systemd-networkd[1359]: lxc_health: Link UP Jul 10 00:26:46.480359 systemd-networkd[1359]: lxc_health: Gained carrier Jul 10 00:26:46.586250 systemd-networkd[1359]: cilium_net: Gained IPv6LL Jul 10 00:26:47.029281 systemd-networkd[1359]: lxc83c9e36fafdf: Link UP Jul 10 00:26:47.036584 kernel: eth0: renamed from tmp69d9d Jul 10 00:26:47.040280 systemd-networkd[1359]: lxc83c9e36fafdf: Gained carrier Jul 10 00:26:47.063177 kernel: eth0: renamed from tmp02778 Jul 10 00:26:47.064418 systemd-networkd[1359]: lxcb9e80ff1c338: Link UP Jul 10 00:26:47.066475 systemd-networkd[1359]: lxcb9e80ff1c338: Gained carrier Jul 10 00:26:47.354282 systemd-networkd[1359]: cilium_vxlan: 
Gained IPv6LL Jul 10 00:26:48.122362 systemd-networkd[1359]: lxc_health: Gained IPv6LL Jul 10 00:26:48.762349 systemd-networkd[1359]: lxc83c9e36fafdf: Gained IPv6LL Jul 10 00:26:48.890395 systemd-networkd[1359]: lxcb9e80ff1c338: Gained IPv6LL Jul 10 00:26:49.838625 containerd[1717]: time="2025-07-10T00:26:49.838556977Z" level=info msg="connecting to shim 69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f" address="unix:///run/containerd/s/e0f94b30d2da78b8d2baf22a965a03d7a997dea755fdc22d292817a7d007fc37" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:49.865178 containerd[1717]: time="2025-07-10T00:26:49.864258245Z" level=info msg="connecting to shim 02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73" address="unix:///run/containerd/s/fd6fc4721f1d44bb4a314292568a27fb32e13ff82a09f52674eb1c4bd7bd8c8e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:49.885332 systemd[1]: Started cri-containerd-69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f.scope - libcontainer container 69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f. Jul 10 00:26:49.909267 systemd[1]: Started cri-containerd-02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73.scope - libcontainer container 02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73. 
Jul 10 00:26:49.958450 containerd[1717]: time="2025-07-10T00:26:49.958422586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsq7g,Uid:077cea8d-5807-4750-b83c-ffedfd662f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f\"" Jul 10 00:26:49.961213 containerd[1717]: time="2025-07-10T00:26:49.960635014Z" level=info msg="CreateContainer within sandbox \"69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:49.968113 containerd[1717]: time="2025-07-10T00:26:49.968094336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lbf7t,Uid:dd9e9e78-17c4-4e6f-9941-56b5520a9777,Namespace:kube-system,Attempt:0,} returns sandbox id \"02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73\"" Jul 10 00:26:49.970296 containerd[1717]: time="2025-07-10T00:26:49.970276655Z" level=info msg="CreateContainer within sandbox \"02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:49.978725 containerd[1717]: time="2025-07-10T00:26:49.978701424Z" level=info msg="Container 6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:49.997311 containerd[1717]: time="2025-07-10T00:26:49.997286610Z" level=info msg="Container 5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:50.003806 containerd[1717]: time="2025-07-10T00:26:50.003781874Z" level=info msg="CreateContainer within sandbox \"69d9ddde7faa7304aa2d8ae74a4e357036718c42df47342fa635c474d9327f9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51\"" Jul 10 00:26:50.004227 containerd[1717]: 
time="2025-07-10T00:26:50.004150156Z" level=info msg="StartContainer for \"6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51\"" Jul 10 00:26:50.005074 containerd[1717]: time="2025-07-10T00:26:50.005024367Z" level=info msg="connecting to shim 6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51" address="unix:///run/containerd/s/e0f94b30d2da78b8d2baf22a965a03d7a997dea755fdc22d292817a7d007fc37" protocol=ttrpc version=3 Jul 10 00:26:50.011098 containerd[1717]: time="2025-07-10T00:26:50.011072202Z" level=info msg="CreateContainer within sandbox \"02778518029e46f8cc54526234b868196ec8885388d33ace05225efe3e796f73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90\"" Jul 10 00:26:50.012874 containerd[1717]: time="2025-07-10T00:26:50.012849934Z" level=info msg="StartContainer for \"5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90\"" Jul 10 00:26:50.018261 containerd[1717]: time="2025-07-10T00:26:50.017392362Z" level=info msg="connecting to shim 5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90" address="unix:///run/containerd/s/fd6fc4721f1d44bb4a314292568a27fb32e13ff82a09f52674eb1c4bd7bd8c8e" protocol=ttrpc version=3 Jul 10 00:26:50.029339 systemd[1]: Started cri-containerd-6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51.scope - libcontainer container 6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51. Jul 10 00:26:50.037326 systemd[1]: Started cri-containerd-5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90.scope - libcontainer container 5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90. 
Jul 10 00:26:50.066590 containerd[1717]: time="2025-07-10T00:26:50.066244960Z" level=info msg="StartContainer for \"6bdda042b1b78314d3aea0fc62fb7d5d69ce341d727fdee8a19adcc80b99db51\" returns successfully" Jul 10 00:26:50.077660 containerd[1717]: time="2025-07-10T00:26:50.077637161Z" level=info msg="StartContainer for \"5c6572d10e8e2f5e1cb51f5ab36ed55fc192462fb87f5a26e09c0bffb4e19b90\" returns successfully" Jul 10 00:26:50.512338 kubelet[3152]: I0710 00:26:50.511674 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lbf7t" podStartSLOduration=29.51165722 podStartE2EDuration="29.51165722s" podCreationTimestamp="2025-07-10 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:50.511417495 +0000 UTC m=+35.235326966" watchObservedRunningTime="2025-07-10 00:26:50.51165722 +0000 UTC m=+35.235566691" Jul 10 00:26:50.529949 kubelet[3152]: I0710 00:26:50.529520 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gsq7g" podStartSLOduration=29.529502675 podStartE2EDuration="29.529502675s" podCreationTimestamp="2025-07-10 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:50.528469493 +0000 UTC m=+35.252378963" watchObservedRunningTime="2025-07-10 00:26:50.529502675 +0000 UTC m=+35.253412145" Jul 10 00:27:38.482917 update_engine[1699]: I20250710 00:27:38.482844 1699 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 10 00:27:38.482917 update_engine[1699]: I20250710 00:27:38.482911 1699 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 10 00:27:38.483480 update_engine[1699]: I20250710 00:27:38.483093 1699 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Jul 10 00:27:38.483631 update_engine[1699]: I20250710 00:27:38.483561 1699 omaha_request_params.cc:62] Current group set to beta Jul 10 00:27:38.483788 update_engine[1699]: I20250710 00:27:38.483770 1699 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.483867 1699 update_attempter.cc:643] Scheduling an action processor start. Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.483903 1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.483954 1699 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.484030 1699 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.484037 1699 omaha_request_action.cc:272] Request: Jul 10 00:27:38.484418 update_engine[1699]: Jul 10 00:27:38.484418 update_engine[1699]: I20250710 00:27:38.484043 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:27:38.485021 locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 10 00:27:38.485470 update_engine[1699]: I20250710 00:27:38.485445 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:27:38.485778 update_engine[1699]: I20250710 00:27:38.485756 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 00:27:38.525767 update_engine[1699]: E20250710 00:27:38.525729 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:27:38.525859 update_engine[1699]: I20250710 00:27:38.525807 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 10 00:27:48.456878 update_engine[1699]: I20250710 00:27:48.456810 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:27:48.457310 update_engine[1699]: I20250710 00:27:48.457075 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:27:48.457401 update_engine[1699]: I20250710 00:27:48.457369 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:27:48.607053 update_engine[1699]: E20250710 00:27:48.606997 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:27:48.607232 update_engine[1699]: I20250710 00:27:48.607084 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 10 00:27:52.071555 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:42086.service - OpenSSH per-connection server daemon (10.200.16.10:42086). Jul 10 00:27:52.706155 sshd[4472]: Accepted publickey for core from 10.200.16.10 port 42086 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:52.707237 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:52.711494 systemd-logind[1698]: New session 10 of user core. Jul 10 00:27:52.716317 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:27:53.225397 sshd[4474]: Connection closed by 10.200.16.10 port 42086 Jul 10 00:27:53.228568 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:53.231549 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:42086.service: Deactivated successfully. Jul 10 00:27:53.233265 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 10 00:27:53.233929 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:27:53.235102 systemd-logind[1698]: Removed session 10. Jul 10 00:27:58.336497 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:42088.service - OpenSSH per-connection server daemon (10.200.16.10:42088). Jul 10 00:27:58.453034 update_engine[1699]: I20250710 00:27:58.452986 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:27:58.453319 update_engine[1699]: I20250710 00:27:58.453197 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:27:58.453465 update_engine[1699]: I20250710 00:27:58.453449 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:27:58.552707 update_engine[1699]: E20250710 00:27:58.552667 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:27:58.552799 update_engine[1699]: I20250710 00:27:58.552728 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 10 00:27:58.968395 sshd[4487]: Accepted publickey for core from 10.200.16.10 port 42088 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:58.969539 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:58.973654 systemd-logind[1698]: New session 11 of user core. Jul 10 00:27:58.976323 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:27:59.462202 sshd[4489]: Connection closed by 10.200.16.10 port 42088 Jul 10 00:27:59.462743 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:59.465444 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:42088.service: Deactivated successfully. Jul 10 00:27:59.467233 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:27:59.469069 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:27:59.470201 systemd-logind[1698]: Removed session 11. 
Jul 10 00:28:04.576671 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:42880.service - OpenSSH per-connection server daemon (10.200.16.10:42880). Jul 10 00:28:05.206889 sshd[4502]: Accepted publickey for core from 10.200.16.10 port 42880 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:05.208050 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:05.212408 systemd-logind[1698]: New session 12 of user core. Jul 10 00:28:05.216358 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:28:05.692857 sshd[4504]: Connection closed by 10.200.16.10 port 42880 Jul 10 00:28:05.693349 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:05.696401 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:42880.service: Deactivated successfully. Jul 10 00:28:05.697804 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:28:05.698455 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:28:05.699893 systemd-logind[1698]: Removed session 12. Jul 10 00:28:05.806009 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:42892.service - OpenSSH per-connection server daemon (10.200.16.10:42892). Jul 10 00:28:06.435662 sshd[4517]: Accepted publickey for core from 10.200.16.10 port 42892 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:06.436973 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:06.441339 systemd-logind[1698]: New session 13 of user core. Jul 10 00:28:06.446343 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:28:06.950018 sshd[4520]: Connection closed by 10.200.16.10 port 42892 Jul 10 00:28:06.950542 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:06.953648 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:42892.service: Deactivated successfully. 
Jul 10 00:28:06.955480 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:28:06.956274 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:28:06.957482 systemd-logind[1698]: Removed session 13. Jul 10 00:28:07.063086 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:42904.service - OpenSSH per-connection server daemon (10.200.16.10:42904). Jul 10 00:28:07.693570 sshd[4530]: Accepted publickey for core from 10.200.16.10 port 42904 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:07.694899 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:07.699095 systemd-logind[1698]: New session 14 of user core. Jul 10 00:28:07.707317 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:28:08.181453 sshd[4532]: Connection closed by 10.200.16.10 port 42904 Jul 10 00:28:08.181950 sshd-session[4530]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:08.184930 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:42904.service: Deactivated successfully. Jul 10 00:28:08.186674 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:28:08.187443 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:28:08.188566 systemd-logind[1698]: Removed session 14. Jul 10 00:28:08.451766 update_engine[1699]: I20250710 00:28:08.451457 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:28:08.451766 update_engine[1699]: I20250710 00:28:08.451719 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:28:08.452056 update_engine[1699]: I20250710 00:28:08.451938 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 00:28:08.555734 update_engine[1699]: E20250710 00:28:08.555678 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555757 1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555766 1699 omaha_request_action.cc:617] Omaha request response: Jul 10 00:28:08.555910 update_engine[1699]: E20250710 00:28:08.555846 1699 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555866 1699 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555872 1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555878 1699 update_attempter.cc:306] Processing Done. Jul 10 00:28:08.555910 update_engine[1699]: E20250710 00:28:08.555897 1699 update_attempter.cc:619] Update failed. Jul 10 00:28:08.555910 update_engine[1699]: I20250710 00:28:08.555908 1699 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.555913 1699 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.555922 1699 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.556017 1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.556046 1699 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.556052 1699 omaha_request_action.cc:272] Request: Jul 10 00:28:08.556147 update_engine[1699]: Jul 10 00:28:08.556147 update_engine[1699]: I20250710 00:28:08.556062 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:28:08.556508 update_engine[1699]: I20250710 00:28:08.556296 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:28:08.556538 update_engine[1699]: I20250710 00:28:08.556524 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 00:28:08.556885 locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 10 00:28:08.599773 update_engine[1699]: E20250710 00:28:08.599732 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599797 1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599807 1699 omaha_request_action.cc:617] Omaha request response: Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599815 1699 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599820 1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599825 1699 update_attempter.cc:306] Processing Done. Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599833 1699 update_attempter.cc:310] Error event sent. Jul 10 00:28:08.599871 update_engine[1699]: I20250710 00:28:08.599843 1699 update_check_scheduler.cc:74] Next update check in 49m37s Jul 10 00:28:08.600277 locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 10 00:28:13.299556 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:49572.service - OpenSSH per-connection server daemon (10.200.16.10:49572). Jul 10 00:28:13.937719 sshd[4544]: Accepted publickey for core from 10.200.16.10 port 49572 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:13.938810 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:13.942677 systemd-logind[1698]: New session 15 of user core. 
Jul 10 00:28:13.947329 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:28:14.427229 sshd[4546]: Connection closed by 10.200.16.10 port 49572 Jul 10 00:28:14.427753 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:14.430826 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:49572.service: Deactivated successfully. Jul 10 00:28:14.432523 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:28:14.433199 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:28:14.434425 systemd-logind[1698]: Removed session 15. Jul 10 00:28:19.544250 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:49576.service - OpenSSH per-connection server daemon (10.200.16.10:49576). Jul 10 00:28:20.175620 sshd[4560]: Accepted publickey for core from 10.200.16.10 port 49576 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:20.176920 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:20.181396 systemd-logind[1698]: New session 16 of user core. Jul 10 00:28:20.187339 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:28:20.672905 sshd[4562]: Connection closed by 10.200.16.10 port 49576 Jul 10 00:28:20.673381 sshd-session[4560]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:20.675839 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:49576.service: Deactivated successfully. Jul 10 00:28:20.677453 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:28:20.678859 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:28:20.680301 systemd-logind[1698]: Removed session 16. Jul 10 00:28:20.785047 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:36840.service - OpenSSH per-connection server daemon (10.200.16.10:36840). 
Jul 10 00:28:21.413976 sshd[4574]: Accepted publickey for core from 10.200.16.10 port 36840 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:21.415234 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:21.418975 systemd-logind[1698]: New session 17 of user core. Jul 10 00:28:21.427274 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:28:21.952780 sshd[4576]: Connection closed by 10.200.16.10 port 36840 Jul 10 00:28:21.953366 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:21.955932 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:36840.service: Deactivated successfully. Jul 10 00:28:21.957562 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:28:21.958876 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:28:21.959909 systemd-logind[1698]: Removed session 17. Jul 10 00:28:22.067046 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:36846.service - OpenSSH per-connection server daemon (10.200.16.10:36846). Jul 10 00:28:22.695735 sshd[4587]: Accepted publickey for core from 10.200.16.10 port 36846 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:22.696796 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:22.700959 systemd-logind[1698]: New session 18 of user core. Jul 10 00:28:22.706294 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:28:23.840735 sshd[4589]: Connection closed by 10.200.16.10 port 36846 Jul 10 00:28:23.841271 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:23.844224 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:36846.service: Deactivated successfully. Jul 10 00:28:23.845718 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:28:23.846418 systemd-logind[1698]: Session 18 logged out. 
Waiting for processes to exit. Jul 10 00:28:23.847610 systemd-logind[1698]: Removed session 18. Jul 10 00:28:23.952563 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:36860.service - OpenSSH per-connection server daemon (10.200.16.10:36860). Jul 10 00:28:24.604547 sshd[4606]: Accepted publickey for core from 10.200.16.10 port 36860 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:24.605867 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:24.610114 systemd-logind[1698]: New session 19 of user core. Jul 10 00:28:24.616298 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:28:25.178518 sshd[4608]: Connection closed by 10.200.16.10 port 36860 Jul 10 00:28:25.179391 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:25.182582 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:36860.service: Deactivated successfully. Jul 10 00:28:25.184312 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:28:25.185089 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:28:25.186150 systemd-logind[1698]: Removed session 19. Jul 10 00:28:25.288888 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:36870.service - OpenSSH per-connection server daemon (10.200.16.10:36870). Jul 10 00:28:25.919218 sshd[4618]: Accepted publickey for core from 10.200.16.10 port 36870 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:25.920517 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:25.925321 systemd-logind[1698]: New session 20 of user core. Jul 10 00:28:25.928296 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 00:28:26.405810 sshd[4620]: Connection closed by 10.200.16.10 port 36870 Jul 10 00:28:26.406335 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:26.409376 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:36870.service: Deactivated successfully. Jul 10 00:28:26.411247 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:28:26.412151 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:28:26.413403 systemd-logind[1698]: Removed session 20. Jul 10 00:28:31.522502 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:50676.service - OpenSSH per-connection server daemon (10.200.16.10:50676). Jul 10 00:28:32.153585 sshd[4634]: Accepted publickey for core from 10.200.16.10 port 50676 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:32.154983 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:32.160064 systemd-logind[1698]: New session 21 of user core. Jul 10 00:28:32.167296 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:28:32.645483 sshd[4636]: Connection closed by 10.200.16.10 port 50676 Jul 10 00:28:32.646071 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:32.648895 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:50676.service: Deactivated successfully. Jul 10 00:28:32.650477 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:28:32.651669 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:28:32.653021 systemd-logind[1698]: Removed session 21. Jul 10 00:28:37.760323 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:50682.service - OpenSSH per-connection server daemon (10.200.16.10:50682). 
Jul 10 00:28:38.387641 sshd[4647]: Accepted publickey for core from 10.200.16.10 port 50682 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:38.388772 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:38.393561 systemd-logind[1698]: New session 22 of user core. Jul 10 00:28:38.400322 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:28:38.876227 sshd[4649]: Connection closed by 10.200.16.10 port 50682 Jul 10 00:28:38.876728 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:38.879704 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:50682.service: Deactivated successfully. Jul 10 00:28:38.881414 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:28:38.882284 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:28:38.883333 systemd-logind[1698]: Removed session 22. Jul 10 00:28:43.993320 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:44938.service - OpenSSH per-connection server daemon (10.200.16.10:44938). Jul 10 00:28:44.623660 sshd[4661]: Accepted publickey for core from 10.200.16.10 port 44938 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:44.625029 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:44.629612 systemd-logind[1698]: New session 23 of user core. Jul 10 00:28:44.633310 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:28:45.110679 sshd[4663]: Connection closed by 10.200.16.10 port 44938 Jul 10 00:28:45.111155 sshd-session[4661]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:45.113749 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:44938.service: Deactivated successfully. Jul 10 00:28:45.115442 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:28:45.117247 systemd-logind[1698]: Session 23 logged out. 
Waiting for processes to exit. Jul 10 00:28:45.118154 systemd-logind[1698]: Removed session 23. Jul 10 00:28:45.225097 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:44942.service - OpenSSH per-connection server daemon (10.200.16.10:44942). Jul 10 00:28:45.855858 sshd[4676]: Accepted publickey for core from 10.200.16.10 port 44942 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:45.857036 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:45.861115 systemd-logind[1698]: New session 24 of user core. Jul 10 00:28:45.867308 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:28:47.484412 containerd[1717]: time="2025-07-10T00:28:47.484206977Z" level=info msg="StopContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" with timeout 30 (s)" Jul 10 00:28:47.485181 containerd[1717]: time="2025-07-10T00:28:47.485099762Z" level=info msg="Stop container \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" with signal terminated" Jul 10 00:28:47.496933 systemd[1]: cri-containerd-7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789.scope: Deactivated successfully. 
Jul 10 00:28:47.499248 containerd[1717]: time="2025-07-10T00:28:47.499132364Z" level=info msg="received exit event container_id:\"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" id:\"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" pid:3676 exited_at:{seconds:1752107327 nanos:498866671}" Jul 10 00:28:47.499447 containerd[1717]: time="2025-07-10T00:28:47.499432459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" id:\"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" pid:3676 exited_at:{seconds:1752107327 nanos:498866671}" Jul 10 00:28:47.501485 containerd[1717]: time="2025-07-10T00:28:47.501460344Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:28:47.507058 containerd[1717]: time="2025-07-10T00:28:47.507031169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" id:\"86999dfa2a404a1fccb4d8170997d5d224e58b9cbe5cf8eafca8c2e63b486c7d\" pid:4695 exited_at:{seconds:1752107327 nanos:506475710}" Jul 10 00:28:47.508766 containerd[1717]: time="2025-07-10T00:28:47.508711983Z" level=info msg="StopContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" with timeout 2 (s)" Jul 10 00:28:47.509250 containerd[1717]: time="2025-07-10T00:28:47.509198102Z" level=info msg="Stop container \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" with signal terminated" Jul 10 00:28:47.520132 systemd-networkd[1359]: lxc_health: Link DOWN Jul 10 00:28:47.520439 systemd-networkd[1359]: lxc_health: Lost carrier Jul 10 00:28:47.524271 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789-rootfs.mount: Deactivated successfully. Jul 10 00:28:47.535448 systemd[1]: cri-containerd-105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b.scope: Deactivated successfully. Jul 10 00:28:47.536025 systemd[1]: cri-containerd-105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b.scope: Consumed 5.025s CPU time, 120.7M memory peak, 152K read from disk, 13.3M written to disk. Jul 10 00:28:47.536946 containerd[1717]: time="2025-07-10T00:28:47.536870962Z" level=info msg="received exit event container_id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" pid:3789 exited_at:{seconds:1752107327 nanos:536501018}" Jul 10 00:28:47.537051 containerd[1717]: time="2025-07-10T00:28:47.537024084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" id:\"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" pid:3789 exited_at:{seconds:1752107327 nanos:536501018}" Jul 10 00:28:47.551618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b-rootfs.mount: Deactivated successfully. 
Jul 10 00:28:47.578330 containerd[1717]: time="2025-07-10T00:28:47.578309494Z" level=info msg="StopContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" returns successfully" Jul 10 00:28:47.578783 containerd[1717]: time="2025-07-10T00:28:47.578764245Z" level=info msg="StopPodSandbox for \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\"" Jul 10 00:28:47.578838 containerd[1717]: time="2025-07-10T00:28:47.578817980Z" level=info msg="Container to stop \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.578838 containerd[1717]: time="2025-07-10T00:28:47.578829103Z" level=info msg="Container to stop \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.578880 containerd[1717]: time="2025-07-10T00:28:47.578836879Z" level=info msg="Container to stop \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.578880 containerd[1717]: time="2025-07-10T00:28:47.578843963Z" level=info msg="Container to stop \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.578880 containerd[1717]: time="2025-07-10T00:28:47.578851028Z" level=info msg="Container to stop \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.582076 containerd[1717]: time="2025-07-10T00:28:47.581988163Z" level=info msg="StopContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" returns successfully" Jul 10 00:28:47.582735 containerd[1717]: time="2025-07-10T00:28:47.582672524Z" level=info msg="StopPodSandbox for 
\"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\"" Jul 10 00:28:47.582821 containerd[1717]: time="2025-07-10T00:28:47.582808240Z" level=info msg="Container to stop \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:28:47.584611 systemd[1]: cri-containerd-7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec.scope: Deactivated successfully. Jul 10 00:28:47.587712 containerd[1717]: time="2025-07-10T00:28:47.587658887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" id:\"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" pid:3293 exit_status:137 exited_at:{seconds:1752107327 nanos:585936801}" Jul 10 00:28:47.590893 systemd[1]: cri-containerd-c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef.scope: Deactivated successfully. Jul 10 00:28:47.612030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec-rootfs.mount: Deactivated successfully. Jul 10 00:28:47.617642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef-rootfs.mount: Deactivated successfully. 
Jul 10 00:28:47.632567 containerd[1717]: time="2025-07-10T00:28:47.632463252Z" level=info msg="shim disconnected" id=c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef namespace=k8s.io Jul 10 00:28:47.632567 containerd[1717]: time="2025-07-10T00:28:47.632497880Z" level=warning msg="cleaning up after shim disconnected" id=c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef namespace=k8s.io Jul 10 00:28:47.632567 containerd[1717]: time="2025-07-10T00:28:47.632505813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:28:47.633087 containerd[1717]: time="2025-07-10T00:28:47.633065383Z" level=info msg="shim disconnected" id=7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec namespace=k8s.io Jul 10 00:28:47.633267 containerd[1717]: time="2025-07-10T00:28:47.633089600Z" level=warning msg="cleaning up after shim disconnected" id=7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec namespace=k8s.io Jul 10 00:28:47.633569 containerd[1717]: time="2025-07-10T00:28:47.633528186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:28:47.647235 containerd[1717]: time="2025-07-10T00:28:47.645248168Z" level=info msg="received exit event sandbox_id:\"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" exit_status:137 exited_at:{seconds:1752107327 nanos:585936801}" Jul 10 00:28:47.647235 containerd[1717]: time="2025-07-10T00:28:47.646904764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" id:\"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" pid:3424 exit_status:137 exited_at:{seconds:1752107327 nanos:596362277}" Jul 10 00:28:47.647486 containerd[1717]: time="2025-07-10T00:28:47.647463224Z" level=info msg="TearDown network for sandbox \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" successfully" Jul 10 00:28:47.647519 containerd[1717]: 
time="2025-07-10T00:28:47.647490073Z" level=info msg="StopPodSandbox for \"7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec\" returns successfully" Jul 10 00:28:47.647596 containerd[1717]: time="2025-07-10T00:28:47.647582518Z" level=info msg="received exit event sandbox_id:\"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" exit_status:137 exited_at:{seconds:1752107327 nanos:596362277}" Jul 10 00:28:47.648599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef-shm.mount: Deactivated successfully. Jul 10 00:28:47.648711 containerd[1717]: time="2025-07-10T00:28:47.648661887Z" level=info msg="TearDown network for sandbox \"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" successfully" Jul 10 00:28:47.648711 containerd[1717]: time="2025-07-10T00:28:47.648680796Z" level=info msg="StopPodSandbox for \"c1daf06bda35517a0d684559a777f1140451d13218e62e8cc19fed4a72cc9bef\" returns successfully" Jul 10 00:28:47.712454 kubelet[3152]: I0710 00:28:47.712437 3152 scope.go:117] "RemoveContainer" containerID="7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789" Jul 10 00:28:47.715378 containerd[1717]: time="2025-07-10T00:28:47.715149353Z" level=info msg="RemoveContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\"" Jul 10 00:28:47.722525 containerd[1717]: time="2025-07-10T00:28:47.722503215Z" level=info msg="RemoveContainer for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" returns successfully" Jul 10 00:28:47.722745 kubelet[3152]: I0710 00:28:47.722707 3152 scope.go:117] "RemoveContainer" containerID="7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789" Jul 10 00:28:47.723032 containerd[1717]: time="2025-07-10T00:28:47.723004639Z" level=error msg="ContainerStatus for \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\" failed" error="rpc error: code = NotFound desc = 
an error occurred when try to find container \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\": not found" Jul 10 00:28:47.723191 kubelet[3152]: E0710 00:28:47.723142 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\": not found" containerID="7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789" Jul 10 00:28:47.723278 kubelet[3152]: I0710 00:28:47.723199 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789"} err="failed to get container status \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bf5e049148cbf5ff40bcf58bd2c11d9a39f2cde2642c47c70c39f3e6cd55789\": not found" Jul 10 00:28:47.723324 kubelet[3152]: I0710 00:28:47.723279 3152 scope.go:117] "RemoveContainer" containerID="105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b" Jul 10 00:28:47.724692 containerd[1717]: time="2025-07-10T00:28:47.724672489Z" level=info msg="RemoveContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\"" Jul 10 00:28:47.730844 containerd[1717]: time="2025-07-10T00:28:47.730812242Z" level=info msg="RemoveContainer for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" returns successfully" Jul 10 00:28:47.730987 kubelet[3152]: I0710 00:28:47.730948 3152 scope.go:117] "RemoveContainer" containerID="6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc" Jul 10 00:28:47.732184 containerd[1717]: time="2025-07-10T00:28:47.732123072Z" level=info msg="RemoveContainer for \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\"" Jul 10 00:28:47.738423 containerd[1717]: time="2025-07-10T00:28:47.738256409Z" 
level=info msg="RemoveContainer for \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" returns successfully" Jul 10 00:28:47.738590 kubelet[3152]: I0710 00:28:47.738549 3152 scope.go:117] "RemoveContainer" containerID="f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207" Jul 10 00:28:47.740620 containerd[1717]: time="2025-07-10T00:28:47.740600041Z" level=info msg="RemoveContainer for \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\"" Jul 10 00:28:47.746646 containerd[1717]: time="2025-07-10T00:28:47.746618004Z" level=info msg="RemoveContainer for \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" returns successfully" Jul 10 00:28:47.746789 kubelet[3152]: I0710 00:28:47.746772 3152 scope.go:117] "RemoveContainer" containerID="91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079" Jul 10 00:28:47.748026 containerd[1717]: time="2025-07-10T00:28:47.748004501Z" level=info msg="RemoveContainer for \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\"" Jul 10 00:28:47.755515 containerd[1717]: time="2025-07-10T00:28:47.755462512Z" level=info msg="RemoveContainer for \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" returns successfully" Jul 10 00:28:47.755626 kubelet[3152]: I0710 00:28:47.755598 3152 scope.go:117] "RemoveContainer" containerID="2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd" Jul 10 00:28:47.756767 containerd[1717]: time="2025-07-10T00:28:47.756742961Z" level=info msg="RemoveContainer for \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\"" Jul 10 00:28:47.762804 containerd[1717]: time="2025-07-10T00:28:47.762768915Z" level=info msg="RemoveContainer for \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" returns successfully" Jul 10 00:28:47.762961 kubelet[3152]: I0710 00:28:47.762948 3152 scope.go:117] "RemoveContainer" 
containerID="105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b" Jul 10 00:28:47.763217 containerd[1717]: time="2025-07-10T00:28:47.763181980Z" level=error msg="ContainerStatus for \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\": not found" Jul 10 00:28:47.763346 kubelet[3152]: E0710 00:28:47.763296 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\": not found" containerID="105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b" Jul 10 00:28:47.763346 kubelet[3152]: I0710 00:28:47.763319 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b"} err="failed to get container status \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"105b433a2e14d8385cc2dee294e554d14f3695dd79291411ac4d0e8d151f6a8b\": not found" Jul 10 00:28:47.763417 kubelet[3152]: I0710 00:28:47.763351 3152 scope.go:117] "RemoveContainer" containerID="6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc" Jul 10 00:28:47.763552 containerd[1717]: time="2025-07-10T00:28:47.763527795Z" level=error msg="ContainerStatus for \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\": not found" Jul 10 00:28:47.763643 kubelet[3152]: E0710 00:28:47.763625 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\": not found" containerID="6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc" Jul 10 00:28:47.763673 kubelet[3152]: I0710 00:28:47.763644 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc"} err="failed to get container status \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a9acad0b1299ae31835755527622873a21d960caeeee4ed32ccef86afb487dc\": not found" Jul 10 00:28:47.763673 kubelet[3152]: I0710 00:28:47.763658 3152 scope.go:117] "RemoveContainer" containerID="f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207" Jul 10 00:28:47.763828 containerd[1717]: time="2025-07-10T00:28:47.763793941Z" level=error msg="ContainerStatus for \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\": not found" Jul 10 00:28:47.763892 kubelet[3152]: E0710 00:28:47.763874 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\": not found" containerID="f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207" Jul 10 00:28:47.763920 kubelet[3152]: I0710 00:28:47.763893 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207"} err="failed to get container status \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"f428b77a29151dae1392d7af94ad2a64abd95632e5121a1f401bd6b8bdac4207\": not found" Jul 10 00:28:47.763920 kubelet[3152]: I0710 00:28:47.763909 3152 scope.go:117] "RemoveContainer" containerID="91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079" Jul 10 00:28:47.764049 containerd[1717]: time="2025-07-10T00:28:47.764028462Z" level=error msg="ContainerStatus for \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\": not found" Jul 10 00:28:47.764131 kubelet[3152]: E0710 00:28:47.764114 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\": not found" containerID="91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079" Jul 10 00:28:47.764170 kubelet[3152]: I0710 00:28:47.764133 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079"} err="failed to get container status \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\": rpc error: code = NotFound desc = an error occurred when try to find container \"91cfe6430a1f6aa7f261237fb2b59c2f1a8cfa310e14a1c95644d56b425b4079\": not found" Jul 10 00:28:47.764170 kubelet[3152]: I0710 00:28:47.764148 3152 scope.go:117] "RemoveContainer" containerID="2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd" Jul 10 00:28:47.764337 containerd[1717]: time="2025-07-10T00:28:47.764314348Z" level=error msg="ContainerStatus for \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\": not found" Jul 10 00:28:47.764432 kubelet[3152]: E0710 00:28:47.764409 3152 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\": not found" containerID="2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd" Jul 10 00:28:47.764480 kubelet[3152]: I0710 00:28:47.764435 3152 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd"} err="failed to get container status \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"2affcff397b4d791df5490e632d5ad7af294f499077ceeda7c8f2530e2cb7cdd\": not found" Jul 10 00:28:47.778586 kubelet[3152]: I0710 00:28:47.778562 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-run\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778626 kubelet[3152]: I0710 00:28:47.778596 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cni-path\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778626 kubelet[3152]: I0710 00:28:47.778611 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-etc-cni-netd\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778676 
kubelet[3152]: I0710 00:28:47.778627 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-xtables-lock\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778676 kubelet[3152]: I0710 00:28:47.778650 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4mnj\" (UniqueName: \"kubernetes.io/projected/75e91e55-fa7a-4496-a07c-ad28726f943d-kube-api-access-j4mnj\") pod \"75e91e55-fa7a-4496-a07c-ad28726f943d\" (UID: \"75e91e55-fa7a-4496-a07c-ad28726f943d\") " Jul 10 00:28:47.778676 kubelet[3152]: I0710 00:28:47.778668 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-kernel\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778742 kubelet[3152]: I0710 00:28:47.778687 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e91e55-fa7a-4496-a07c-ad28726f943d-cilium-config-path\") pod \"75e91e55-fa7a-4496-a07c-ad28726f943d\" (UID: \"75e91e55-fa7a-4496-a07c-ad28726f943d\") " Jul 10 00:28:47.778742 kubelet[3152]: I0710 00:28:47.778707 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq98n\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-kube-api-access-nq98n\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778742 kubelet[3152]: I0710 00:28:47.778726 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-config-path\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778813 kubelet[3152]: I0710 00:28:47.778741 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-lib-modules\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778813 kubelet[3152]: I0710 00:28:47.778759 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cf73113-592c-4607-94f2-66abe0c5ecee-clustermesh-secrets\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778813 kubelet[3152]: I0710 00:28:47.778774 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-net\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778813 kubelet[3152]: I0710 00:28:47.778789 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-hostproc\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778813 kubelet[3152]: I0710 00:28:47.778805 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-bpf-maps\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778913 kubelet[3152]: I0710 00:28:47.778823 3152 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-hubble-tls\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778913 kubelet[3152]: I0710 00:28:47.778838 3152 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-cgroup\") pod \"6cf73113-592c-4607-94f2-66abe0c5ecee\" (UID: \"6cf73113-592c-4607-94f2-66abe0c5ecee\") " Jul 10 00:28:47.778913 kubelet[3152]: I0710 00:28:47.778890 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.778973 kubelet[3152]: I0710 00:28:47.778920 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.778973 kubelet[3152]: I0710 00:28:47.778934 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cni-path" (OuterVolumeSpecName: "cni-path") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.778973 kubelet[3152]: I0710 00:28:47.778946 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.778973 kubelet[3152]: I0710 00:28:47.778957 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.780178 kubelet[3152]: I0710 00:28:47.779074 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.780178 kubelet[3152]: I0710 00:28:47.779094 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.781058 kubelet[3152]: I0710 00:28:47.781032 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e91e55-fa7a-4496-a07c-ad28726f943d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75e91e55-fa7a-4496-a07c-ad28726f943d" (UID: "75e91e55-fa7a-4496-a07c-ad28726f943d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:28:47.782381 kubelet[3152]: I0710 00:28:47.781133 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.782467 kubelet[3152]: I0710 00:28:47.781145 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-hostproc" (OuterVolumeSpecName: "hostproc") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.782557 kubelet[3152]: I0710 00:28:47.781171 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:28:47.782589 kubelet[3152]: I0710 00:28:47.782527 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e91e55-fa7a-4496-a07c-ad28726f943d-kube-api-access-j4mnj" (OuterVolumeSpecName: "kube-api-access-j4mnj") pod "75e91e55-fa7a-4496-a07c-ad28726f943d" (UID: "75e91e55-fa7a-4496-a07c-ad28726f943d"). InnerVolumeSpecName "kube-api-access-j4mnj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:28:47.784108 kubelet[3152]: I0710 00:28:47.784063 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf73113-592c-4607-94f2-66abe0c5ecee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:28:47.784830 kubelet[3152]: I0710 00:28:47.784179 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:28:47.785245 kubelet[3152]: I0710 00:28:47.785154 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-kube-api-access-nq98n" (OuterVolumeSpecName: "kube-api-access-nq98n") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "kube-api-access-nq98n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:28:47.785324 kubelet[3152]: I0710 00:28:47.785314 3152 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cf73113-592c-4607-94f2-66abe0c5ecee" (UID: "6cf73113-592c-4607-94f2-66abe0c5ecee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:28:47.880020 kubelet[3152]: I0710 00:28:47.879987 3152 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-xtables-lock\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880020 kubelet[3152]: I0710 00:28:47.880009 3152 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j4mnj\" (UniqueName: \"kubernetes.io/projected/75e91e55-fa7a-4496-a07c-ad28726f943d-kube-api-access-j4mnj\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880020 kubelet[3152]: I0710 00:28:47.880019 3152 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880029 3152 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nq98n\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-kube-api-access-nq98n\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880039 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75e91e55-fa7a-4496-a07c-ad28726f943d-cilium-config-path\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 
00:28:47.880152 kubelet[3152]: I0710 00:28:47.880050 3152 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-lib-modules\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880061 3152 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cf73113-592c-4607-94f2-66abe0c5ecee-clustermesh-secrets\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880070 3152 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-host-proc-sys-net\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880079 3152 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-hostproc\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880088 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-config-path\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880152 kubelet[3152]: I0710 00:28:47.880125 3152 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cf73113-592c-4607-94f2-66abe0c5ecee-hubble-tls\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880333 kubelet[3152]: I0710 00:28:47.880137 3152 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-bpf-maps\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 
00:28:47.880333 kubelet[3152]: I0710 00:28:47.880146 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-cgroup\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880333 kubelet[3152]: I0710 00:28:47.880169 3152 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cilium-run\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880333 kubelet[3152]: I0710 00:28:47.880178 3152 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-cni-path\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:47.880333 kubelet[3152]: I0710 00:28:47.880187 3152 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cf73113-592c-4607-94f2-66abe0c5ecee-etc-cni-netd\") on node \"ci-4344.1.1-n-e449e01ea1\" DevicePath \"\"" Jul 10 00:28:48.016087 systemd[1]: Removed slice kubepods-besteffort-pod75e91e55_fa7a_4496_a07c_ad28726f943d.slice - libcontainer container kubepods-besteffort-pod75e91e55_fa7a_4496_a07c_ad28726f943d.slice. Jul 10 00:28:48.022065 systemd[1]: Removed slice kubepods-burstable-pod6cf73113_592c_4607_94f2_66abe0c5ecee.slice - libcontainer container kubepods-burstable-pod6cf73113_592c_4607_94f2_66abe0c5ecee.slice. Jul 10 00:28:48.022254 systemd[1]: kubepods-burstable-pod6cf73113_592c_4607_94f2_66abe0c5ecee.slice: Consumed 5.097s CPU time, 121.1M memory peak, 152K read from disk, 13.3M written to disk. Jul 10 00:28:48.524799 systemd[1]: var-lib-kubelet-pods-75e91e55\x2dfa7a\x2d4496\x2da07c\x2dad28726f943d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4mnj.mount: Deactivated successfully. 
Jul 10 00:28:48.525279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7004c8fd1b7a61cb43aeebb89490495f757c9fe6067e164be62278bf5b92c5ec-shm.mount: Deactivated successfully. Jul 10 00:28:48.525378 systemd[1]: var-lib-kubelet-pods-6cf73113\x2d592c\x2d4607\x2d94f2\x2d66abe0c5ecee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnq98n.mount: Deactivated successfully. Jul 10 00:28:48.525472 systemd[1]: var-lib-kubelet-pods-6cf73113\x2d592c\x2d4607\x2d94f2\x2d66abe0c5ecee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:28:48.525561 systemd[1]: var-lib-kubelet-pods-6cf73113\x2d592c\x2d4607\x2d94f2\x2d66abe0c5ecee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:28:49.365707 kubelet[3152]: I0710 00:28:49.365427 3152 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf73113-592c-4607-94f2-66abe0c5ecee" path="/var/lib/kubelet/pods/6cf73113-592c-4607-94f2-66abe0c5ecee/volumes" Jul 10 00:28:49.366673 kubelet[3152]: I0710 00:28:49.366429 3152 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e91e55-fa7a-4496-a07c-ad28726f943d" path="/var/lib/kubelet/pods/75e91e55-fa7a-4496-a07c-ad28726f943d/volumes" Jul 10 00:28:49.531673 sshd[4678]: Connection closed by 10.200.16.10 port 44942 Jul 10 00:28:49.532306 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:49.535514 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:44942.service: Deactivated successfully. Jul 10 00:28:49.537689 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:28:49.538811 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:28:49.540131 systemd-logind[1698]: Removed session 24. Jul 10 00:28:49.647984 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:44950.service - OpenSSH per-connection server daemon (10.200.16.10:44950). 
Jul 10 00:28:50.284124 sshd[4828]: Accepted publickey for core from 10.200.16.10 port 44950 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:50.285465 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:50.289216 systemd-logind[1698]: New session 25 of user core. Jul 10 00:28:50.295329 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:28:50.468107 kubelet[3152]: E0710 00:28:50.468071 3152 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:28:50.967124 kubelet[3152]: I0710 00:28:50.967084 3152 memory_manager.go:355] "RemoveStaleState removing state" podUID="75e91e55-fa7a-4496-a07c-ad28726f943d" containerName="cilium-operator" Jul 10 00:28:50.967124 kubelet[3152]: I0710 00:28:50.967121 3152 memory_manager.go:355] "RemoveStaleState removing state" podUID="6cf73113-592c-4607-94f2-66abe0c5ecee" containerName="cilium-agent" Jul 10 00:28:50.977900 systemd[1]: Created slice kubepods-burstable-pode4f7287d_3f5c_4b0d_9072_d4fec908252f.slice - libcontainer container kubepods-burstable-pode4f7287d_3f5c_4b0d_9072_d4fec908252f.slice. Jul 10 00:28:51.039551 sshd[4830]: Connection closed by 10.200.16.10 port 44950 Jul 10 00:28:51.039964 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:51.042867 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:44950.service: Deactivated successfully. Jul 10 00:28:51.044484 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:28:51.045320 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:28:51.046479 systemd-logind[1698]: Removed session 25. 
Jul 10 00:28:51.094783 kubelet[3152]: I0710 00:28:51.094737 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4f7287d-3f5c-4b0d-9072-d4fec908252f-clustermesh-secrets\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.094783 kubelet[3152]: I0710 00:28:51.094766 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-host-proc-sys-kernel\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.094783 kubelet[3152]: I0710 00:28:51.094787 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4f7287d-3f5c-4b0d-9072-d4fec908252f-cilium-ipsec-secrets\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094801 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-host-proc-sys-net\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094815 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-cilium-run\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094831 3152 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-bpf-maps\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094845 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-cilium-cgroup\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094858 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-xtables-lock\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095010 kubelet[3152]: I0710 00:28:51.094871 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-hostproc\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094887 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-etc-cni-netd\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094900 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gspc\" (UniqueName: 
\"kubernetes.io/projected/e4f7287d-3f5c-4b0d-9072-d4fec908252f-kube-api-access-4gspc\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094915 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-cni-path\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094931 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4f7287d-3f5c-4b0d-9072-d4fec908252f-lib-modules\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094947 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f7287d-3f5c-4b0d-9072-d4fec908252f-cilium-config-path\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.095098 kubelet[3152]: I0710 00:28:51.094961 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4f7287d-3f5c-4b0d-9072-d4fec908252f-hubble-tls\") pod \"cilium-hcscc\" (UID: \"e4f7287d-3f5c-4b0d-9072-d4fec908252f\") " pod="kube-system/cilium-hcscc" Jul 10 00:28:51.158303 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.16.10:50428.service - OpenSSH per-connection server daemon (10.200.16.10:50428). 
Jul 10 00:28:51.281670 containerd[1717]: time="2025-07-10T00:28:51.281415823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcscc,Uid:e4f7287d-3f5c-4b0d-9072-d4fec908252f,Namespace:kube-system,Attempt:0,}" Jul 10 00:28:51.316360 containerd[1717]: time="2025-07-10T00:28:51.316322544Z" level=info msg="connecting to shim dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:28:51.341306 systemd[1]: Started cri-containerd-dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6.scope - libcontainer container dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6. Jul 10 00:28:51.362887 containerd[1717]: time="2025-07-10T00:28:51.362866102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcscc,Uid:e4f7287d-3f5c-4b0d-9072-d4fec908252f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\"" Jul 10 00:28:51.365081 containerd[1717]: time="2025-07-10T00:28:51.365055388Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:28:51.376790 containerd[1717]: time="2025-07-10T00:28:51.376205216Z" level=info msg="Container 9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:28:51.386869 containerd[1717]: time="2025-07-10T00:28:51.386845515Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\"" Jul 10 00:28:51.387273 containerd[1717]: time="2025-07-10T00:28:51.387240460Z" level=info 
msg="StartContainer for \"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\"" Jul 10 00:28:51.388257 containerd[1717]: time="2025-07-10T00:28:51.388218576Z" level=info msg="connecting to shim 9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" protocol=ttrpc version=3 Jul 10 00:28:51.404279 systemd[1]: Started cri-containerd-9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df.scope - libcontainer container 9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df. Jul 10 00:28:51.429701 containerd[1717]: time="2025-07-10T00:28:51.429665455Z" level=info msg="StartContainer for \"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\" returns successfully" Jul 10 00:28:51.430557 systemd[1]: cri-containerd-9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df.scope: Deactivated successfully. Jul 10 00:28:51.432333 containerd[1717]: time="2025-07-10T00:28:51.432309565Z" level=info msg="received exit event container_id:\"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\" id:\"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\" pid:4906 exited_at:{seconds:1752107331 nanos:432071149}" Jul 10 00:28:51.432454 containerd[1717]: time="2025-07-10T00:28:51.432378039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\" id:\"9e45895f82d28b0bb0c37f79bb135f7ae38657669b76e70173665693f163e8df\" pid:4906 exited_at:{seconds:1752107331 nanos:432071149}" Jul 10 00:28:51.730704 containerd[1717]: time="2025-07-10T00:28:51.730349730Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:28:51.746626 containerd[1717]: time="2025-07-10T00:28:51.746599789Z" 
level=info msg="Container 516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:28:51.757847 containerd[1717]: time="2025-07-10T00:28:51.757749994Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\"" Jul 10 00:28:51.758443 containerd[1717]: time="2025-07-10T00:28:51.758305623Z" level=info msg="StartContainer for \"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\"" Jul 10 00:28:51.759547 containerd[1717]: time="2025-07-10T00:28:51.759509997Z" level=info msg="connecting to shim 516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" protocol=ttrpc version=3 Jul 10 00:28:51.779303 systemd[1]: Started cri-containerd-516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64.scope - libcontainer container 516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64. Jul 10 00:28:51.787647 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 50428 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:51.788736 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:51.794224 systemd-logind[1698]: New session 26 of user core. Jul 10 00:28:51.798368 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:28:51.811624 systemd[1]: cri-containerd-516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64.scope: Deactivated successfully. 
Jul 10 00:28:51.813018 containerd[1717]: time="2025-07-10T00:28:51.812993875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\" id:\"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\" pid:4950 exited_at:{seconds:1752107331 nanos:812603144}" Jul 10 00:28:51.813119 containerd[1717]: time="2025-07-10T00:28:51.813007747Z" level=info msg="received exit event container_id:\"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\" id:\"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\" pid:4950 exited_at:{seconds:1752107331 nanos:812603144}" Jul 10 00:28:51.814002 containerd[1717]: time="2025-07-10T00:28:51.813985097Z" level=info msg="StartContainer for \"516c23502bea9e17994d918f65db3584e31bf3adc9ee9dcec7aeb2ff37c95a64\" returns successfully" Jul 10 00:28:52.241120 sshd[4962]: Connection closed by 10.200.16.10 port 50428 Jul 10 00:28:52.241669 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:52.244640 systemd[1]: sshd@23-10.200.8.13:22-10.200.16.10:50428.service: Deactivated successfully. Jul 10 00:28:52.246314 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:28:52.247069 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:28:52.248322 systemd-logind[1698]: Removed session 26. Jul 10 00:28:52.353073 systemd[1]: Started sshd@24-10.200.8.13:22-10.200.16.10:50440.service - OpenSSH per-connection server daemon (10.200.16.10:50440). 
Jul 10 00:28:52.736747 containerd[1717]: time="2025-07-10T00:28:52.736649089Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:28:52.760999 containerd[1717]: time="2025-07-10T00:28:52.759266467Z" level=info msg="Container ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:28:52.773686 containerd[1717]: time="2025-07-10T00:28:52.773658393Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\"" Jul 10 00:28:52.774834 containerd[1717]: time="2025-07-10T00:28:52.774064430Z" level=info msg="StartContainer for \"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\"" Jul 10 00:28:52.775855 containerd[1717]: time="2025-07-10T00:28:52.775798077Z" level=info msg="connecting to shim ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" protocol=ttrpc version=3 Jul 10 00:28:52.795400 systemd[1]: Started cri-containerd-ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a.scope - libcontainer container ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a. Jul 10 00:28:52.821804 systemd[1]: cri-containerd-ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a.scope: Deactivated successfully. 
Jul 10 00:28:52.822970 containerd[1717]: time="2025-07-10T00:28:52.822946420Z" level=info msg="received exit event container_id:\"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\" id:\"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\" pid:5003 exited_at:{seconds:1752107332 nanos:822623626}" Jul 10 00:28:52.823505 containerd[1717]: time="2025-07-10T00:28:52.823460848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\" id:\"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\" pid:5003 exited_at:{seconds:1752107332 nanos:822623626}" Jul 10 00:28:52.824862 containerd[1717]: time="2025-07-10T00:28:52.824842183Z" level=info msg="StartContainer for \"ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a\" returns successfully" Jul 10 00:28:52.838746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad8e4d98a8f4f9a79a3403da12950b532da2cf342facad7d67209fed19b8eb6a-rootfs.mount: Deactivated successfully. Jul 10 00:28:52.985995 sshd[4989]: Accepted publickey for core from 10.200.16.10 port 50440 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:28:52.987093 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:52.991227 systemd-logind[1698]: New session 27 of user core. Jul 10 00:28:52.994296 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 10 00:28:53.742276 containerd[1717]: time="2025-07-10T00:28:53.742142469Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:28:53.760975 containerd[1717]: time="2025-07-10T00:28:53.760943349Z" level=info msg="Container d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:28:53.767734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351319011.mount: Deactivated successfully. Jul 10 00:28:53.778684 containerd[1717]: time="2025-07-10T00:28:53.778601525Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\"" Jul 10 00:28:53.779602 containerd[1717]: time="2025-07-10T00:28:53.779565962Z" level=info msg="StartContainer for \"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\"" Jul 10 00:28:53.781384 containerd[1717]: time="2025-07-10T00:28:53.781358961Z" level=info msg="connecting to shim d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" protocol=ttrpc version=3 Jul 10 00:28:53.801327 systemd[1]: Started cri-containerd-d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0.scope - libcontainer container d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0. Jul 10 00:28:53.820777 systemd[1]: cri-containerd-d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0.scope: Deactivated successfully. 
Jul 10 00:28:53.821553 containerd[1717]: time="2025-07-10T00:28:53.821287779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\" id:\"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\" pid:5054 exited_at:{seconds:1752107333 nanos:821004465}" Jul 10 00:28:53.824587 containerd[1717]: time="2025-07-10T00:28:53.824485617Z" level=info msg="received exit event container_id:\"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\" id:\"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\" pid:5054 exited_at:{seconds:1752107333 nanos:821004465}" Jul 10 00:28:53.831251 containerd[1717]: time="2025-07-10T00:28:53.831213419Z" level=info msg="StartContainer for \"d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0\" returns successfully" Jul 10 00:28:53.841340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71eb55f27945bbd7b95238baad39b3ffe4adad02b8a97ddcc86904da6bb2ec0-rootfs.mount: Deactivated successfully. Jul 10 00:28:54.748205 containerd[1717]: time="2025-07-10T00:28:54.748009425Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:28:54.772343 containerd[1717]: time="2025-07-10T00:28:54.769897835Z" level=info msg="Container f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:28:54.776135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694982288.mount: Deactivated successfully. 
Jul 10 00:28:54.786314 containerd[1717]: time="2025-07-10T00:28:54.786285750Z" level=info msg="CreateContainer within sandbox \"dd18713b7c4b24d7f4025346559604dc52f2a9eb413dfc8f9e7455254fe796c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\"" Jul 10 00:28:54.786613 containerd[1717]: time="2025-07-10T00:28:54.786596719Z" level=info msg="StartContainer for \"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\"" Jul 10 00:28:54.789363 containerd[1717]: time="2025-07-10T00:28:54.788615080Z" level=info msg="connecting to shim f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2" address="unix:///run/containerd/s/4749d89e2f105f2c98bfd5f3e982a6fa102156439a51a7775a7b44721fb80c35" protocol=ttrpc version=3 Jul 10 00:28:54.809318 systemd[1]: Started cri-containerd-f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2.scope - libcontainer container f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2. 
Jul 10 00:28:54.836040 containerd[1717]: time="2025-07-10T00:28:54.836017733Z" level=info msg="StartContainer for \"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" returns successfully" Jul 10 00:28:54.892188 containerd[1717]: time="2025-07-10T00:28:54.892146885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" id:\"699bb460b45deb1bcb08a88add7e5ab8e52469044670ff26869ee249afc55212\" pid:5121 exited_at:{seconds:1752107334 nanos:891920892}" Jul 10 00:28:55.158183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jul 10 00:28:57.549856 systemd-networkd[1359]: lxc_health: Link UP Jul 10 00:28:57.556942 systemd-networkd[1359]: lxc_health: Gained carrier Jul 10 00:28:57.603504 containerd[1717]: time="2025-07-10T00:28:57.603459579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" id:\"644ce025987009d0a6a87ee0602da6a16c270fd49e8b702dec9f812c0c44256a\" pid:5592 exit_status:1 exited_at:{seconds:1752107337 nanos:602925938}" Jul 10 00:28:58.682403 systemd-networkd[1359]: lxc_health: Gained IPv6LL Jul 10 00:28:59.306407 kubelet[3152]: I0710 00:28:59.306299 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hcscc" podStartSLOduration=9.306280246 podStartE2EDuration="9.306280246s" podCreationTimestamp="2025-07-10 00:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:28:55.765646169 +0000 UTC m=+160.489555638" watchObservedRunningTime="2025-07-10 00:28:59.306280246 +0000 UTC m=+164.030189939" Jul 10 00:28:59.721272 containerd[1717]: time="2025-07-10T00:28:59.720779018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" 
id:\"53fffe19e2ac4e896faf4da4cdb552c8e12226434c07529e9e92fc52fcea9ffd\" pid:5644 exited_at:{seconds:1752107339 nanos:720220392}" Jul 10 00:29:01.809058 containerd[1717]: time="2025-07-10T00:29:01.809013614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" id:\"5a25168a657787177bb77be82aa1e51f6262eaa7a3214751a82bc354b90b7741\" pid:5686 exited_at:{seconds:1752107341 nanos:808568381}" Jul 10 00:29:03.888950 containerd[1717]: time="2025-07-10T00:29:03.888507153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f564695ab2603f0413f7558b988fb3ae9802271a65906eabec4edac2a5ff93a2\" id:\"28302e068e560bbde29368410acf3e64d8c2ddba00e3898cc34688650a351339\" pid:5711 exited_at:{seconds:1752107343 nanos:888238960}" Jul 10 00:29:03.990635 sshd[5031]: Connection closed by 10.200.16.10 port 50440 Jul 10 00:29:03.991142 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Jul 10 00:29:03.993860 systemd[1]: sshd@24-10.200.8.13:22-10.200.16.10:50440.service: Deactivated successfully. Jul 10 00:29:03.995438 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:29:03.996856 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:29:03.998054 systemd-logind[1698]: Removed session 27.