Jul 1 08:33:46.943802 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jun 30 19:26:54 -00 2025 Jul 1 08:33:46.943826 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:33:46.943837 kernel: BIOS-provided physical RAM map: Jul 1 08:33:46.943847 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 1 08:33:46.943854 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 1 08:33:46.943862 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 1 08:33:46.943870 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jul 1 08:33:46.943878 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jul 1 08:33:46.943886 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 1 08:33:46.943893 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 1 08:33:46.943901 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jul 1 08:33:46.943909 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 1 08:33:46.943918 kernel: NX (Execute Disable) protection: active Jul 1 08:33:46.943926 kernel: APIC: Static calls initialized Jul 1 08:33:46.943935 kernel: SMBIOS 3.0.0 present. Jul 1 08:33:46.943943 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jul 1 08:33:46.943951 kernel: DMI: Memory slots populated: 1/1 Jul 1 08:33:46.943961 kernel: Hypervisor detected: KVM Jul 1 08:33:46.943969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 1 08:33:46.943977 kernel: kvm-clock: using sched offset of 4909632261 cycles Jul 1 08:33:46.943985 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 1 08:33:46.943994 kernel: tsc: Detected 1996.249 MHz processor Jul 1 08:33:46.944002 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 1 08:33:46.944011 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 1 08:33:46.944020 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jul 1 08:33:46.944028 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 1 08:33:46.944038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 1 08:33:46.944047 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jul 1 08:33:46.944055 kernel: ACPI: Early table checksum verification disabled Jul 1 08:33:46.944063 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jul 1 08:33:46.944071 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:33:46.944080 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:33:46.944088 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:33:46.944096 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jul 1 08:33:46.944105 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:33:46.944115 kernel: ACPI: WAET 0x00000000BFFE1B3D 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:33:46.944123 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jul 1 08:33:46.944132 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jul 1 08:33:46.944140 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jul 1 08:33:46.944148 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jul 1 08:33:46.944159 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jul 1 08:33:46.944168 kernel: No NUMA configuration found Jul 1 08:33:46.944178 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jul 1 08:33:46.944187 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff] Jul 1 08:33:46.944196 kernel: Zone ranges: Jul 1 08:33:46.944204 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 1 08:33:46.944213 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 1 08:33:46.944221 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jul 1 08:33:46.944230 kernel: Device empty Jul 1 08:33:46.944238 kernel: Movable zone start for each node Jul 1 08:33:46.944273 kernel: Early memory node ranges Jul 1 08:33:46.944283 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 1 08:33:46.944291 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jul 1 08:33:46.944300 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jul 1 08:33:46.944309 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jul 1 08:33:46.944317 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 1 08:33:46.944326 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 1 08:33:46.944334 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jul 1 08:33:46.944343 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 1 08:33:46.944354 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 1 08:33:46.944363 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 1 08:33:46.944372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 1 08:33:46.944380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 1 08:33:46.944389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 1 08:33:46.944397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 1 08:33:46.944406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 1 08:33:46.944415 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 1 08:33:46.944423 kernel: CPU topo: Max. logical packages: 2 Jul 1 08:33:46.944433 kernel: CPU topo: Max. logical dies: 2 Jul 1 08:33:46.944442 kernel: CPU topo: Max. dies per package: 1 Jul 1 08:33:46.944450 kernel: CPU topo: Max. threads per core: 1 Jul 1 08:33:46.944459 kernel: CPU topo: Num. cores per package: 1 Jul 1 08:33:46.944467 kernel: CPU topo: Num. 
threads per package: 1 Jul 1 08:33:46.944476 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 1 08:33:46.944484 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 1 08:33:46.944493 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jul 1 08:33:46.944501 kernel: Booting paravirtualized kernel on KVM Jul 1 08:33:46.944512 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 1 08:33:46.944521 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 1 08:33:46.944529 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 1 08:33:46.944538 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 1 08:33:46.944546 kernel: pcpu-alloc: [0] 0 1 Jul 1 08:33:46.944554 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 1 08:33:46.944564 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:33:46.944574 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 1 08:33:46.944584 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 1 08:33:46.944593 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 1 08:33:46.944601 kernel: Fallback order for Node 0: 0 Jul 1 08:33:46.944610 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jul 1 08:33:46.944619 kernel: Policy zone: Normal Jul 1 08:33:46.944627 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 1 08:33:46.944636 kernel: software IO TLB: area num 2. Jul 1 08:33:46.944645 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 1 08:33:46.944653 kernel: ftrace: allocating 40095 entries in 157 pages Jul 1 08:33:46.944663 kernel: ftrace: allocated 157 pages with 5 groups Jul 1 08:33:46.944672 kernel: Dynamic Preempt: voluntary Jul 1 08:33:46.944680 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 1 08:33:46.944690 kernel: rcu: RCU event tracing is enabled. Jul 1 08:33:46.944699 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 1 08:33:46.944708 kernel: Trampoline variant of Tasks RCU enabled. Jul 1 08:33:46.944716 kernel: Rude variant of Tasks RCU enabled. Jul 1 08:33:46.944725 kernel: Tracing variant of Tasks RCU enabled. Jul 1 08:33:46.944734 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 1 08:33:46.944743 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 1 08:33:46.944753 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 1 08:33:46.944762 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 1 08:33:46.944771 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 1 08:33:46.944780 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 1 08:33:46.944788 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 1 08:33:46.944797 kernel: Console: colour VGA+ 80x25 Jul 1 08:33:46.944805 kernel: printk: legacy console [tty0] enabled Jul 1 08:33:46.944814 kernel: printk: legacy console [ttyS0] enabled Jul 1 08:33:46.944824 kernel: ACPI: Core revision 20240827 Jul 1 08:33:46.944833 kernel: APIC: Switch to symmetric I/O mode setup Jul 1 08:33:46.944841 kernel: x2apic enabled Jul 1 08:33:46.944850 kernel: APIC: Switched APIC routing to: physical x2apic Jul 1 08:33:46.944858 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 1 08:33:46.944867 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 1 08:33:46.944881 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jul 1 08:33:46.944892 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 1 08:33:46.944901 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 1 08:33:46.944910 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 1 08:33:46.944918 kernel: Spectre V2 : Mitigation: Retpolines Jul 1 08:33:46.944928 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 1 08:33:46.944938 kernel: Speculative Store Bypass: Vulnerable Jul 1 08:33:46.944947 kernel: x86/fpu: x87 FPU will use FXSAVE Jul 1 08:33:46.944956 kernel: Freeing SMP alternatives memory: 32K Jul 1 08:33:46.944965 kernel: pid_max: default: 32768 minimum: 301 Jul 1 08:33:46.944974 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 1 08:33:46.944984 kernel: landlock: Up and running. Jul 1 08:33:46.944993 kernel: SELinux: Initializing. Jul 1 08:33:46.945002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 08:33:46.945012 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 08:33:46.945021 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jul 1 08:33:46.945030 kernel: Performance Events: AMD PMU driver. Jul 1 08:33:46.945039 kernel: ... version: 0 Jul 1 08:33:46.945048 kernel: ... bit width: 48 Jul 1 08:33:46.945056 kernel: ... generic registers: 4 Jul 1 08:33:46.945068 kernel: ... value mask: 0000ffffffffffff Jul 1 08:33:46.945077 kernel: ... max period: 00007fffffffffff Jul 1 08:33:46.945086 kernel: ... fixed-purpose events: 0 Jul 1 08:33:46.945095 kernel: ... event mask: 000000000000000f Jul 1 08:33:46.945104 kernel: signal: max sigframe size: 1440 Jul 1 08:33:46.945113 kernel: rcu: Hierarchical SRCU implementation. Jul 1 08:33:46.945122 kernel: rcu: Max phase no-delay instances is 400. Jul 1 08:33:46.945131 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 1 08:33:46.945140 kernel: smp: Bringing up secondary CPUs ... Jul 1 08:33:46.945151 kernel: smpboot: x86: Booting SMP configuration: Jul 1 08:33:46.945160 kernel: .... 
node #0, CPUs: #1 Jul 1 08:33:46.945169 kernel: smp: Brought up 1 node, 2 CPUs Jul 1 08:33:46.945178 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jul 1 08:33:46.945187 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54508K init, 2460K bss, 227296K reserved, 0K cma-reserved) Jul 1 08:33:46.945196 kernel: devtmpfs: initialized Jul 1 08:33:46.945205 kernel: x86/mm: Memory block size: 128MB Jul 1 08:33:46.945215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 1 08:33:46.945224 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 1 08:33:46.945234 kernel: pinctrl core: initialized pinctrl subsystem Jul 1 08:33:46.945243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 1 08:33:46.945266 kernel: audit: initializing netlink subsys (disabled) Jul 1 08:33:46.945275 kernel: audit: type=2000 audit(1751358823.461:1): state=initialized audit_enabled=0 res=1 Jul 1 08:33:46.945284 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 1 08:33:46.945293 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 1 08:33:46.945302 kernel: cpuidle: using governor menu Jul 1 08:33:46.945326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 1 08:33:46.945335 kernel: dca service started, version 1.12.1 Jul 1 08:33:46.945346 kernel: PCI: Using configuration type 1 for base access Jul 1 08:33:46.945356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 1 08:33:46.945365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 1 08:33:46.945374 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 1 08:33:46.945383 kernel: ACPI: Added _OSI(Module Device) Jul 1 08:33:46.945392 kernel: ACPI: Added _OSI(Processor Device) Jul 1 08:33:46.945401 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 1 08:33:46.945410 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 1 08:33:46.945419 kernel: ACPI: Interpreter enabled Jul 1 08:33:46.945430 kernel: ACPI: PM: (supports S0 S3 S5) Jul 1 08:33:46.945439 kernel: ACPI: Using IOAPIC for interrupt routing Jul 1 08:33:46.945448 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 1 08:33:46.945457 kernel: PCI: Using E820 reservations for host bridge windows Jul 1 08:33:46.945466 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 1 08:33:46.945475 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 1 08:33:46.945613 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 1 08:33:46.945711 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 1 08:33:46.945800 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 1 08:33:46.945813 kernel: acpiphp: Slot [3] registered Jul 1 08:33:46.945823 kernel: acpiphp: Slot [4] registered Jul 1 08:33:46.945832 kernel: acpiphp: Slot [5] registered Jul 1 08:33:46.945841 kernel: acpiphp: Slot [6] registered Jul 1 08:33:46.945850 kernel: acpiphp: Slot [7] registered Jul 1 08:33:46.945858 kernel: acpiphp: Slot [8] registered Jul 1 08:33:46.945868 kernel: acpiphp: Slot [9] registered Jul 1 08:33:46.945876 kernel: acpiphp: Slot [10] registered Jul 1 08:33:46.945903 kernel: acpiphp: Slot [11] registered Jul 1 08:33:46.945912 kernel: acpiphp: Slot 
[12] registered Jul 1 08:33:46.945921 kernel: acpiphp: Slot [13] registered Jul 1 08:33:46.945930 kernel: acpiphp: Slot [14] registered Jul 1 08:33:46.945939 kernel: acpiphp: Slot [15] registered Jul 1 08:33:46.945948 kernel: acpiphp: Slot [16] registered Jul 1 08:33:46.945957 kernel: acpiphp: Slot [17] registered Jul 1 08:33:46.945966 kernel: acpiphp: Slot [18] registered Jul 1 08:33:46.945974 kernel: acpiphp: Slot [19] registered Jul 1 08:33:46.945985 kernel: acpiphp: Slot [20] registered Jul 1 08:33:46.945994 kernel: acpiphp: Slot [21] registered Jul 1 08:33:46.946003 kernel: acpiphp: Slot [22] registered Jul 1 08:33:46.946012 kernel: acpiphp: Slot [23] registered Jul 1 08:33:46.946021 kernel: acpiphp: Slot [24] registered Jul 1 08:33:46.946030 kernel: acpiphp: Slot [25] registered Jul 1 08:33:46.946039 kernel: acpiphp: Slot [26] registered Jul 1 08:33:46.946048 kernel: acpiphp: Slot [27] registered Jul 1 08:33:46.946057 kernel: acpiphp: Slot [28] registered Jul 1 08:33:46.946065 kernel: acpiphp: Slot [29] registered Jul 1 08:33:46.946076 kernel: acpiphp: Slot [30] registered Jul 1 08:33:46.946085 kernel: acpiphp: Slot [31] registered Jul 1 08:33:46.946094 kernel: PCI host bridge to bus 0000:00 Jul 1 08:33:46.946186 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 1 08:33:46.946287 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 1 08:33:46.946367 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 1 08:33:46.946443 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 1 08:33:46.946523 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jul 1 08:33:46.946598 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 1 08:33:46.946700 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 1 08:33:46.946800 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 1 08:33:46.946900 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jul 1 08:33:46.946988 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] Jul 1 08:33:46.947078 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 1 08:33:46.947162 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 1 08:33:46.947263 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 1 08:33:46.947356 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 1 08:33:46.947450 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 1 08:33:46.947537 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 1 08:33:46.947622 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 1 08:33:46.947722 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jul 1 08:33:46.947810 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jul 1 08:33:46.947898 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] Jul 1 08:33:46.947986 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] Jul 1 08:33:46.948072 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] Jul 1 08:33:46.948157 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 1 08:33:46.948270 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 1 08:33:46.948369 kernel: pci 
0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] Jul 1 08:33:46.948455 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] Jul 1 08:33:46.948541 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] Jul 1 08:33:46.948626 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] Jul 1 08:33:46.948724 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 1 08:33:46.948810 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jul 1 08:33:46.948896 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] Jul 1 08:33:46.948985 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] Jul 1 08:33:46.949081 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jul 1 08:33:46.949168 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] Jul 1 08:33:46.949280 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] Jul 1 08:33:46.949380 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 1 08:33:46.949471 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] Jul 1 08:33:46.949562 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] Jul 1 08:33:46.949648 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] Jul 1 08:33:46.949661 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 1 08:33:46.949671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 1 08:33:46.949680 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 1 08:33:46.949689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 1 08:33:46.949698 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 1 08:33:46.949707 kernel: iommu: Default domain type: Translated Jul 1 08:33:46.949717 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 1 08:33:46.949729 kernel: PCI: Using ACPI for IRQ routing Jul 1 08:33:46.949738 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 1 08:33:46.949747 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 1 08:33:46.949757 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jul 1 08:33:46.949842 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 1 08:33:46.949945 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 1 08:33:46.950034 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 1 08:33:46.950048 kernel: vgaarb: loaded Jul 1 08:33:46.950057 kernel: clocksource: Switched to clocksource kvm-clock Jul 1 08:33:46.950070 kernel: VFS: Disk quotas dquot_6.6.0 Jul 1 08:33:46.950079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 1 08:33:46.950088 kernel: pnp: PnP ACPI init Jul 1 08:33:46.950185 kernel: pnp 00:03: [dma 2] Jul 1 08:33:46.950199 kernel: pnp: PnP ACPI: found 5 devices Jul 1 08:33:46.950209 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 1 08:33:46.950218 kernel: NET: Registered PF_INET protocol family Jul 1 08:33:46.950227 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 1 08:33:46.950239 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 1 08:33:46.952289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 1 08:33:46.952307 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 1 08:33:46.952317 kernel: TCP bind hash table entries: 32768 (order: 
8, 1048576 bytes, linear) Jul 1 08:33:46.952326 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 1 08:33:46.952336 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 08:33:46.952345 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 08:33:46.952355 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 1 08:33:46.952364 kernel: NET: Registered PF_XDP protocol family Jul 1 08:33:46.952464 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 1 08:33:46.952542 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 1 08:33:46.952617 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 1 08:33:46.952692 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jul 1 08:33:46.952766 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jul 1 08:33:46.952857 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 1 08:33:46.952945 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 1 08:33:46.952959 kernel: PCI: CLS 0 bytes, default 64 Jul 1 08:33:46.952972 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 1 08:33:46.952981 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jul 1 08:33:46.952990 kernel: Initialise system trusted keyrings Jul 1 08:33:46.953000 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 1 08:33:46.953009 kernel: Key type asymmetric registered Jul 1 08:33:46.953018 kernel: Asymmetric key parser 'x509' registered Jul 1 08:33:46.953027 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 1 08:33:46.953037 kernel: io scheduler mq-deadline registered Jul 1 08:33:46.953047 kernel: io scheduler kyber registered Jul 1 08:33:46.953057 kernel: io scheduler bfq registered Jul 1 08:33:46.953066 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 1 08:33:46.953076 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 1 08:33:46.953085 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 1 08:33:46.953094 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 1 08:33:46.953104 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 1 08:33:46.953113 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 1 08:33:46.953122 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 1 08:33:46.953131 kernel: random: crng init done Jul 1 08:33:46.953142 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 1 08:33:46.953151 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 1 08:33:46.953161 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 1 08:33:46.953287 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 1 08:33:46.953303 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 1 08:33:46.953385 kernel: rtc_cmos 00:04: registered as rtc0 Jul 1 08:33:46.953463 kernel: rtc_cmos 00:04: setting system clock to 2025-07-01T08:33:46 UTC (1751358826) Jul 1 08:33:46.953541 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 1 08:33:46.953559 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 1 08:33:46.953568 kernel: NET: Registered PF_INET6 protocol family Jul 1 08:33:46.953577 kernel: Segment Routing with IPv6 Jul 1 08:33:46.953586 kernel: In-situ OAM (IOAM) with IPv6 Jul 1 08:33:46.953596 kernel: NET: Registered PF_PACKET protocol family Jul 1 08:33:46.953605 kernel: Key type 
dns_resolver registered Jul 1 08:33:46.953614 kernel: IPI shorthand broadcast: enabled Jul 1 08:33:46.953623 kernel: sched_clock: Marking stable (3773007758, 196573006)->(4022395321, -52814557) Jul 1 08:33:46.953634 kernel: registered taskstats version 1 Jul 1 08:33:46.953643 kernel: Loading compiled-in X.509 certificates Jul 1 08:33:46.953653 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: bdab85da21e6e40e781d68d3bf17f0a40ee7357c' Jul 1 08:33:46.953662 kernel: Demotion targets for Node 0: null Jul 1 08:33:46.953671 kernel: Key type .fscrypt registered Jul 1 08:33:46.953680 kernel: Key type fscrypt-provisioning registered Jul 1 08:33:46.953690 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 1 08:33:46.953699 kernel: ima: Allocated hash algorithm: sha1 Jul 1 08:33:46.953708 kernel: ima: No architecture policies found Jul 1 08:33:46.953719 kernel: clk: Disabling unused clocks Jul 1 08:33:46.953728 kernel: Warning: unable to open an initial console. Jul 1 08:33:46.953737 kernel: Freeing unused kernel image (initmem) memory: 54508K Jul 1 08:33:46.953746 kernel: Write protecting the kernel read-only data: 24576k Jul 1 08:33:46.953756 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 1 08:33:46.953765 kernel: Run /init as init process Jul 1 08:33:46.953774 kernel: with arguments: Jul 1 08:33:46.953784 kernel: /init Jul 1 08:33:46.953792 kernel: with environment: Jul 1 08:33:46.953803 kernel: HOME=/ Jul 1 08:33:46.953812 kernel: TERM=linux Jul 1 08:33:46.953821 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 1 08:33:46.953832 systemd[1]: Successfully made /usr/ read-only. Jul 1 08:33:46.953844 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 1 08:33:46.953855 systemd[1]: Detected virtualization kvm. Jul 1 08:33:46.953865 systemd[1]: Detected architecture x86-64. Jul 1 08:33:46.953882 systemd[1]: Running in initrd. Jul 1 08:33:46.953909 systemd[1]: No hostname configured, using default hostname. Jul 1 08:33:46.953920 systemd[1]: Hostname set to . Jul 1 08:33:46.953930 systemd[1]: Initializing machine ID from VM UUID. Jul 1 08:33:46.953940 systemd[1]: Queued start job for default target initrd.target. Jul 1 08:33:46.953950 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 08:33:46.953962 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:33:46.953973 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 1 08:33:46.953983 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 08:33:46.953994 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 1 08:33:46.954005 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 1 08:33:46.954016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 1 08:33:46.954027 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jul 1 08:33:46.954038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:33:46.954048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:33:46.954058 systemd[1]: Reached target paths.target - Path Units. Jul 1 08:33:46.954068 systemd[1]: Reached target slices.target - Slice Units. Jul 1 08:33:46.954078 systemd[1]: Reached target swap.target - Swaps. Jul 1 08:33:46.954088 systemd[1]: Reached target timers.target - Timer Units. Jul 1 08:33:46.954098 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 08:33:46.954109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 08:33:46.954120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 1 08:33:46.954130 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 1 08:33:46.954141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:33:46.954151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 08:33:46.954161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:33:46.954171 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:33:46.954181 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 1 08:33:46.954191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 08:33:46.954202 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 1 08:33:46.954214 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 1 08:33:46.954224 systemd[1]: Starting systemd-fsck-usr.service... Jul 1 08:33:46.954236 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 08:33:46.954246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 08:33:46.954289 systemd-journald[214]: Collecting audit messages is disabled. Jul 1 08:33:46.954320 systemd-journald[214]: Journal started Jul 1 08:33:46.954345 systemd-journald[214]: Runtime Journal (/run/log/journal/8469e05e177f467abe8e9af5117cf696) is 8M, max 78.5M, 70.5M free. Jul 1 08:33:46.966866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:33:46.968735 systemd-modules-load[216]: Inserted module 'overlay' Jul 1 08:33:46.976294 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 08:33:46.990013 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 1 08:33:46.991873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:33:47.004470 systemd[1]: Finished systemd-fsck-usr.service. Jul 1 08:33:47.008267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 1 08:33:47.013168 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 1 08:33:47.013697 kernel: Bridge firewalling registered Jul 1 08:33:47.014586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 08:33:47.024363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:33:47.026883 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 1 08:33:47.031350 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:33:47.051639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:33:47.054716 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 1 08:33:47.091208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:33:47.091891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 08:33:47.097462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:33:47.100358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 08:33:47.104372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:33:47.106396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:33:47.125440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:33:47.128218 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:33:47.129972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 1 08:33:47.154208 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:33:47.161217 systemd-resolved[240]: Positive Trust Anchors: Jul 1 08:33:47.161234 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:33:47.161301 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:33:47.165395 systemd-resolved[240]: Defaulting to hostname 'linux'. Jul 1 08:33:47.166423 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:33:47.167277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:33:47.255332 kernel: SCSI subsystem initialized Jul 1 08:33:47.265356 kernel: Loading iSCSI transport class v2.0-870. Jul 1 08:33:47.278423 kernel: iscsi: registered transport (tcp) Jul 1 08:33:47.301477 kernel: iscsi: registered transport (qla4xxx) Jul 1 08:33:47.301557 kernel: QLogic iSCSI HBA Driver Jul 1 08:33:47.330382 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 08:33:47.368879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:33:47.377452 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 1 08:33:47.470764 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 1 08:33:47.475492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 1 08:33:47.569390 kernel: raid6: sse2x4 gen() 7460 MB/s Jul 1 08:33:47.587352 kernel: raid6: sse2x2 gen() 14436 MB/s Jul 1 08:33:47.605397 kernel: raid6: sse2x1 gen() 9629 MB/s Jul 1 08:33:47.605474 kernel: raid6: using algorithm sse2x2 gen() 14436 MB/s Jul 1 08:33:47.625619 kernel: raid6: .... xor() 5827 MB/s, rmw enabled Jul 1 08:33:47.625698 kernel: raid6: using ssse3x2 recovery algorithm Jul 1 08:33:47.653994 kernel: xor: measuring software checksum speed Jul 1 08:33:47.654059 kernel: prefetch64-sse : 17143 MB/sec Jul 1 08:33:47.654504 kernel: generic_sse : 15617 MB/sec Jul 1 08:33:47.656792 kernel: xor: using function: prefetch64-sse (17143 MB/sec) Jul 1 08:33:47.864790 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 1 08:33:47.873682 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:33:47.877973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:33:47.933844 systemd-udevd[463]: Using default interface naming scheme 'v255'. Jul 1 08:33:47.947396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:33:47.954401 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 1 08:33:47.979609 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Jul 1 08:33:48.014985 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:33:48.019704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:33:48.071396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:33:48.080470 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 1 08:33:48.160301 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 1 08:33:48.178296 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jul 1 08:33:48.181081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:33:48.181216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:33:48.183737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:33:48.185111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:33:48.186861 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:33:48.200543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 1 08:33:48.200591 kernel: GPT:17805311 != 20971519 Jul 1 08:33:48.200603 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 1 08:33:48.202927 kernel: GPT:17805311 != 20971519 Jul 1 08:33:48.202950 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 1 08:33:48.204557 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:33:48.218315 kernel: libata version 3.00 loaded. 
Jul 1 08:33:48.221284 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 1 08:33:48.222290 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 1 08:33:48.224401 kernel: scsi host0: ata_piix Jul 1 08:33:48.226279 kernel: scsi host1: ata_piix Jul 1 08:33:48.230564 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jul 1 08:33:48.230613 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jul 1 08:33:48.295230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 1 08:33:48.308642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:33:48.319871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 1 08:33:48.330540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:33:48.338988 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 1 08:33:48.339581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 1 08:33:48.343235 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 1 08:33:48.371214 disk-uuid[564]: Primary Header is updated. Jul 1 08:33:48.371214 disk-uuid[564]: Secondary Entries is updated. Jul 1 08:33:48.371214 disk-uuid[564]: Secondary Header is updated. Jul 1 08:33:48.380325 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:33:48.505773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 1 08:33:48.532129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:33:48.532797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:33:48.534562 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 08:33:48.539367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 1 08:33:48.570047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 1 08:33:49.401329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:33:49.401973 disk-uuid[565]: The operation has completed successfully. Jul 1 08:33:49.483774 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 1 08:33:49.484561 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 1 08:33:49.536161 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 1 08:33:49.555886 sh[589]: Success Jul 1 08:33:49.581763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 1 08:33:49.581807 kernel: device-mapper: uevent: version 1.0.3 Jul 1 08:33:49.583062 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 1 08:33:49.600330 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jul 1 08:33:49.692635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 1 08:33:49.696389 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 1 08:33:49.715154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 1 08:33:49.728817 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 1 08:33:49.728890 kernel: BTRFS: device fsid aeab36fb-d8a9-440c-a872-a8cce0218739 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (601) Jul 1 08:33:49.734542 kernel: BTRFS info (device dm-0): first mount of filesystem aeab36fb-d8a9-440c-a872-a8cce0218739 Jul 1 08:33:49.734608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:33:49.737239 kernel: BTRFS info (device dm-0): using free-space-tree Jul 1 08:33:49.749805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 1 08:33:49.752227 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:33:49.754420 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 1 08:33:49.756407 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 1 08:33:49.759224 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 1 08:33:49.802292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (632) Jul 1 08:33:49.811418 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:33:49.811455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:33:49.816153 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:33:49.831296 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:33:49.832937 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 1 08:33:49.836446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 1 08:33:49.888750 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:33:49.891164 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:33:49.927079 systemd-networkd[772]: lo: Link UP Jul 1 08:33:49.927091 systemd-networkd[772]: lo: Gained carrier Jul 1 08:33:49.928401 systemd-networkd[772]: Enumeration completed Jul 1 08:33:49.928489 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:33:49.929311 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:33:49.929315 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 08:33:49.930496 systemd-networkd[772]: eth0: Link UP Jul 1 08:33:49.930501 systemd-networkd[772]: eth0: Gained carrier Jul 1 08:33:49.930510 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:33:49.934123 systemd[1]: Reached target network.target - Network. 
Jul 1 08:33:49.950599 systemd-networkd[772]: eth0: DHCPv4 address 172.24.4.49/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 1 08:33:50.046686 ignition[687]: Ignition 2.21.0 Jul 1 08:33:50.046719 ignition[687]: Stage: fetch-offline Jul 1 08:33:50.047344 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:50.047376 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:50.047701 ignition[687]: parsed url from cmdline: "" Jul 1 08:33:50.047882 ignition[687]: no config URL provided Jul 1 08:33:50.047907 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:33:50.047924 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:33:50.047933 ignition[687]: failed to fetch config: resource requires networking Jul 1 08:33:50.048341 ignition[687]: Ignition finished successfully Jul 1 08:33:50.053129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:33:50.056991 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 1 08:33:50.084866 ignition[784]: Ignition 2.21.0 Jul 1 08:33:50.085661 ignition[784]: Stage: fetch Jul 1 08:33:50.085816 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:50.085827 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:50.085932 ignition[784]: parsed url from cmdline: "" Jul 1 08:33:50.085936 ignition[784]: no config URL provided Jul 1 08:33:50.085943 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:33:50.085951 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:33:50.086070 ignition[784]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 1 08:33:50.086168 ignition[784]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 1 08:33:50.086242 ignition[784]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 1 08:33:50.276963 ignition[784]: GET result: OK Jul 1 08:33:50.277158 ignition[784]: parsing config with SHA512: 8666290acdaa82c68d9a80b70af26962e672d578b9bed15d31db01b314230f3c233dda4e4e771b9cd8f8b982c9cb455e45c90e74317e1bb197fbc6d18bfdccd4 Jul 1 08:33:50.293113 unknown[784]: fetched base config from "system" Jul 1 08:33:50.293138 unknown[784]: fetched base config from "system" Jul 1 08:33:50.294075 ignition[784]: fetch: fetch complete Jul 1 08:33:50.293152 unknown[784]: fetched user config from "openstack" Jul 1 08:33:50.294088 ignition[784]: fetch: fetch passed Jul 1 08:33:50.299152 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 1 08:33:50.294178 ignition[784]: Ignition finished successfully Jul 1 08:33:50.304962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 1 08:33:50.375024 ignition[790]: Ignition 2.21.0 Jul 1 08:33:50.375048 ignition[790]: Stage: kargs Jul 1 08:33:50.376965 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:50.376991 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:50.379186 ignition[790]: kargs: kargs passed Jul 1 08:33:50.382111 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 1 08:33:50.379316 ignition[790]: Ignition finished successfully Jul 1 08:33:50.388544 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 1 08:33:50.449990 ignition[796]: Ignition 2.21.0 Jul 1 08:33:50.450023 ignition[796]: Stage: disks Jul 1 08:33:50.450985 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:50.451014 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:50.458034 ignition[796]: disks: disks passed Jul 1 08:33:50.458174 ignition[796]: Ignition finished successfully Jul 1 08:33:50.461989 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 1 08:33:50.463442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 1 08:33:50.465952 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 08:33:50.469199 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:33:50.472548 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:33:50.475107 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:33:50.480307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 1 08:33:50.543658 systemd-fsck[805]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 1 08:33:50.556036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 1 08:33:50.561483 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 1 08:33:50.754285 kernel: EXT4-fs (vda9): mounted filesystem 18421243-07cc-41b2-b496-d6a2cef84352 r/w with ordered data mode. Quota mode: none. Jul 1 08:33:50.754832 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 1 08:33:50.757869 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 1 08:33:50.763699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:33:50.768012 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 1 08:33:50.782693 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 1 08:33:50.788929 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 1 08:33:50.792068 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 1 08:33:50.792138 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:33:50.802501 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 1 08:33:50.806804 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 1 08:33:50.815303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (813) Jul 1 08:33:50.820797 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:33:50.820850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:33:50.823553 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:33:50.840855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:33:50.928352 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:33:50.961224 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jul 1 08:33:50.969110 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jul 1 08:33:50.978363 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jul 1 08:33:50.984972 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jul 1 08:33:51.115339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 1 08:33:51.121061 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 1 08:33:51.125496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 1 08:33:51.140995 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 1 08:33:51.146300 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:33:51.180751 ignition[930]: INFO : Ignition 2.21.0 Jul 1 08:33:51.181560 ignition[930]: INFO : Stage: mount Jul 1 08:33:51.182029 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:51.182029 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:51.184621 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 1 08:33:51.185162 ignition[930]: INFO : mount: mount passed Jul 1 08:33:51.187181 ignition[930]: INFO : Ignition finished successfully Jul 1 08:33:51.187977 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 1 08:33:51.624607 systemd-networkd[772]: eth0: Gained IPv6LL Jul 1 08:33:51.967342 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:33:53.989285 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:33:58.002333 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:33:58.013778 coreos-metadata[815]: Jul 01 08:33:58.013 WARN failed to locate config-drive, using the metadata service API instead Jul 1 08:33:58.055607 coreos-metadata[815]: Jul 01 08:33:58.055 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 1 08:33:58.070220 coreos-metadata[815]: Jul 01 08:33:58.070 INFO Fetch successful Jul 1 08:33:58.071614 coreos-metadata[815]: Jul 01 08:33:58.070 INFO wrote hostname ci-9999-9-9-s-39d8ad6622.novalocal to /sysroot/etc/hostname Jul 1 08:33:58.074552 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 1 08:33:58.074783 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 1 08:33:58.082454 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 1 08:33:58.112713 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:33:58.144325 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (946) Jul 1 08:33:58.152161 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:33:58.152227 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:33:58.156523 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:33:58.170809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:33:58.220514 ignition[964]: INFO : Ignition 2.21.0 Jul 1 08:33:58.220514 ignition[964]: INFO : Stage: files Jul 1 08:33:58.223550 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:33:58.223550 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:33:58.223550 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jul 1 08:33:58.229072 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 1 08:33:58.229072 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 1 08:33:58.233820 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 1 08:33:58.233820 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 1 08:33:58.233820 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 1 08:33:58.231551 unknown[964]: wrote ssh authorized keys file for user: core Jul 1 08:33:58.250073 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:33:58.250073 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 1 08:33:58.341158 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 1 08:33:58.724007 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:33:58.724007 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 1 08:33:58.729012 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 1 08:33:59.412535 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 1 08:33:59.929391 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 1 08:33:59.929391 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:33:59.934665 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:33:59.949410 ignition[964]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:33:59.949410 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:33:59.949410 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:33:59.949410 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:33:59.949410 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:33:59.949410 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 1 08:34:00.603467 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 1 08:34:03.190841 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:34:03.192419 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 1 08:34:03.195018 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:34:03.205107 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:34:03.205107 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 1 08:34:03.205107 ignition[964]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 1 08:34:03.205107 ignition[964]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 1 08:34:03.216012 ignition[964]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:34:03.216012 ignition[964]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:34:03.216012 ignition[964]: INFO : files: files passed Jul 1 08:34:03.216012 ignition[964]: INFO : Ignition finished successfully Jul 1 08:34:03.207765 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 1 08:34:03.212387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 1 08:34:03.215819 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 1 08:34:03.228166 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 1 08:34:03.228290 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 1 08:34:03.242771 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:34:03.242771 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:34:03.244990 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:34:03.246207 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 1 08:34:03.249400 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 1 08:34:03.253428 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 1 08:34:03.310923 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 1 08:34:03.311181 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 1 08:34:03.314038 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 1 08:34:03.316391 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 1 08:34:03.319510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 1 08:34:03.321442 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 1 08:34:03.391879 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 08:34:03.401170 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 1 08:34:03.461163 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:34:03.464566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:34:03.466302 systemd[1]: Stopped target timers.target - Timer Units. Jul 1 08:34:03.469220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 1 08:34:03.469554 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 08:34:03.472768 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 1 08:34:03.474748 systemd[1]: Stopped target basic.target - Basic System. Jul 1 08:34:03.477670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 1 08:34:03.480386 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:34:03.482985 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 1 08:34:03.486298 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:34:03.489372 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 1 08:34:03.492226 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:34:03.495624 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 1 08:34:03.498492 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 1 08:34:03.501660 systemd[1]: Stopped target swap.target - Swaps. Jul 1 08:34:03.504413 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 1 08:34:03.504794 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 1 08:34:03.507785 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:34:03.509800 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:34:03.512380 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 1 08:34:03.512670 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 08:34:03.515542 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 1 08:34:03.515923 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 1 08:34:03.519693 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 1 08:34:03.520013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 1 08:34:03.521757 systemd[1]: ignition-files.service: Deactivated successfully. Jul 1 08:34:03.522085 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 1 08:34:03.526695 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 1 08:34:03.530055 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 1 08:34:03.530520 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:34:03.536699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 1 08:34:03.542103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 1 08:34:03.543585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:34:03.548537 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 1 08:34:03.548810 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:34:03.561722 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 1 08:34:03.562427 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 1 08:34:03.587012 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 1 08:34:03.593934 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 1 08:34:03.594773 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 1 08:34:03.604462 ignition[1018]: INFO : Ignition 2.21.0 Jul 1 08:34:03.604462 ignition[1018]: INFO : Stage: umount Jul 1 08:34:03.605556 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:34:03.605556 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 1 08:34:03.605556 ignition[1018]: INFO : umount: umount passed Jul 1 08:34:03.605556 ignition[1018]: INFO : Ignition finished successfully Jul 1 08:34:03.607146 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 1 08:34:03.607233 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 1 08:34:03.608182 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 1 08:34:03.608247 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 1 08:34:03.610559 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 1 08:34:03.610644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 1 08:34:03.611530 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 1 08:34:03.611574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 1 08:34:03.612613 systemd[1]: Stopped target network.target - Network. Jul 1 08:34:03.613613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 1 08:34:03.613663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:34:03.614785 systemd[1]: Stopped target paths.target - Path Units. Jul 1 08:34:03.615827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 1 08:34:03.616081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:34:03.616928 systemd[1]: Stopped target slices.target - Slice Units. Jul 1 08:34:03.617896 systemd[1]: Stopped target sockets.target - Socket Units. Jul 1 08:34:03.618961 systemd[1]: iscsid.socket: Deactivated successfully. Jul 1 08:34:03.619001 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 08:34:03.620136 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 1 08:34:03.620170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 08:34:03.621340 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 1 08:34:03.621390 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 1 08:34:03.622400 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 1 08:34:03.622444 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 1 08:34:03.623600 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 1 08:34:03.623646 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 1 08:34:03.629093 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 1 08:34:03.630232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 1 08:34:03.636463 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 1 08:34:03.636582 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 1 08:34:03.640997 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 1 08:34:03.641240 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 1 08:34:03.641393 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 1 08:34:03.643410 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 1 08:34:03.643919 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 1 08:34:03.644900 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 1 08:34:03.644944 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:34:03.646893 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 1 08:34:03.648545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 1 08:34:03.648591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:34:03.649688 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 1 08:34:03.649729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:34:03.652101 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 1 08:34:03.652146 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 1 08:34:03.653330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 1 08:34:03.653377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:34:03.654927 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:34:03.659325 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 1 08:34:03.659386 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:34:03.662732 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 1 08:34:03.663437 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:34:03.664443 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 1 08:34:03.664500 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 1 08:34:03.665031 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 1 08:34:03.665063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:34:03.666448 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 1 08:34:03.666495 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:34:03.668851 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 1 08:34:03.668894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 1 08:34:03.670126 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 1 08:34:03.670170 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:34:03.673349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 1 08:34:03.674084 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 1 08:34:03.674133 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:34:03.675565 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 1 08:34:03.675609 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:34:03.678505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:34:03.678568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:34:03.683720 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 1 08:34:03.683786 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 1 08:34:03.683826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:34:03.684186 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 1 08:34:03.684339 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 1 08:34:03.688223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 1 08:34:03.688320 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 1 08:34:03.689144 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 1 08:34:03.690883 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 1 08:34:03.710379 systemd[1]: Switching root. Jul 1 08:34:03.755903 systemd-journald[214]: Journal stopped Jul 1 08:34:05.926177 systemd-journald[214]: Received SIGTERM from PID 1 (systemd). 
Jul 1 08:34:05.926268 kernel: SELinux: policy capability network_peer_controls=1 Jul 1 08:34:05.926290 kernel: SELinux: policy capability open_perms=1 Jul 1 08:34:05.926302 kernel: SELinux: policy capability extended_socket_class=1 Jul 1 08:34:05.926314 kernel: SELinux: policy capability always_check_network=0 Jul 1 08:34:05.926326 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 1 08:34:05.926339 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 1 08:34:05.926353 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 1 08:34:05.926365 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 1 08:34:05.926376 kernel: SELinux: policy capability userspace_initial_context=0 Jul 1 08:34:05.926387 kernel: audit: type=1403 audit(1751358844.593:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 1 08:34:05.926399 systemd[1]: Successfully loaded SELinux policy in 200.488ms. Jul 1 08:34:05.926423 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.699ms. Jul 1 08:34:05.926437 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 1 08:34:05.926450 systemd[1]: Detected virtualization kvm. Jul 1 08:34:05.926465 systemd[1]: Detected architecture x86-64. Jul 1 08:34:05.926477 systemd[1]: Detected first boot. Jul 1 08:34:05.926490 systemd[1]: Hostname set to . Jul 1 08:34:05.926502 systemd[1]: Initializing machine ID from VM UUID. Jul 1 08:34:05.926515 zram_generator::config[1061]: No configuration found. Jul 1 08:34:05.926528 kernel: Guest personality initialized and is inactive Jul 1 08:34:05.926539 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 1 08:34:05.926550 kernel: Initialized host personality Jul 1 08:34:05.926561 kernel: NET: Registered PF_VSOCK protocol family Jul 1 08:34:05.926574 systemd[1]: Populated /etc with preset unit settings. Jul 1 08:34:05.926588 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 1 08:34:05.926600 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 1 08:34:05.926612 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 1 08:34:05.926624 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 1 08:34:05.926637 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 1 08:34:05.926650 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 1 08:34:05.926669 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 1 08:34:05.926683 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 1 08:34:05.926696 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 1 08:34:05.926708 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 1 08:34:05.926721 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 1 08:34:05.926733 systemd[1]: Created slice user.slice - User and Session Slice. Jul 1 08:34:05.926746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 1 08:34:05.926759 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:34:05.926771 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 1 08:34:05.926786 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 1 08:34:05.926799 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 1 08:34:05.926811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 08:34:05.926824 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 1 08:34:05.926837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:34:05.926850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:34:05.926862 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 1 08:34:05.926876 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 1 08:34:05.926889 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 1 08:34:05.926901 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 1 08:34:05.926914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:34:05.926931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 08:34:05.926943 systemd[1]: Reached target slices.target - Slice Units. Jul 1 08:34:05.926956 systemd[1]: Reached target swap.target - Swaps. Jul 1 08:34:05.926968 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 1 08:34:05.926980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 1 08:34:05.926995 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 1 08:34:05.927007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:34:05.927019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 08:34:05.927032 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:34:05.927044 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 1 08:34:05.927056 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 1 08:34:05.927069 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 1 08:34:05.927081 systemd[1]: Mounting media.mount - External Media Directory... Jul 1 08:34:05.927094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:05.927108 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 1 08:34:05.927121 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 1 08:34:05.927133 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 1 08:34:05.927146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 1 08:34:05.927159 systemd[1]: Reached target machines.target - Containers. Jul 1 08:34:05.927171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 1 08:34:05.927184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:34:05.927196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 08:34:05.927208 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 1 08:34:05.927223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:34:05.927235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 08:34:05.927264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:34:05.927280 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 1 08:34:05.927293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:34:05.927305 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 1 08:34:05.927318 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 1 08:34:05.927330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 1 08:34:05.927348 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 1 08:34:05.927360 systemd[1]: Stopped systemd-fsck-usr.service. Jul 1 08:34:05.927373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:34:05.927386 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 08:34:05.927398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 08:34:05.927410 kernel: loop: module loaded Jul 1 08:34:05.927422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 08:34:05.927435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 1 08:34:05.927447 kernel: fuse: init (API version 7.41) Jul 1 08:34:05.927461 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 1 08:34:05.927474 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:34:05.927486 systemd[1]: verity-setup.service: Deactivated successfully. Jul 1 08:34:05.927499 systemd[1]: Stopped verity-setup.service. Jul 1 08:34:05.927511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:05.927526 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 1 08:34:05.927538 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 1 08:34:05.927550 systemd[1]: Mounted media.mount - External Media Directory. Jul 1 08:34:05.927563 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 1 08:34:05.927575 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 1 08:34:05.927590 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 1 08:34:05.927602 kernel: ACPI: bus type drm_connector registered Jul 1 08:34:05.927614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:34:05.927627 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 1 08:34:05.927640 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 1 08:34:05.927654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:34:05.927684 systemd-journald[1151]: Collecting audit messages is disabled. Jul 1 08:34:05.927713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:34:05.927728 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 08:34:05.927740 systemd-journald[1151]: Journal started Jul 1 08:34:05.927766 systemd-journald[1151]: Runtime Journal (/run/log/journal/8469e05e177f467abe8e9af5117cf696) is 8M, max 78.5M, 70.5M free. Jul 1 08:34:05.557596 systemd[1]: Queued start job for default target multi-user.target. Jul 1 08:34:05.583382 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 1 08:34:05.583839 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 1 08:34:05.932277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 08:34:05.937274 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 08:34:05.937114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 08:34:05.937561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:34:05.938547 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 1 08:34:05.938728 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 1 08:34:05.939787 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:34:05.939945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:34:05.941045 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 08:34:05.941906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:34:05.942736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 1 08:34:05.943631 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 1 08:34:05.956754 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 1 08:34:05.959198 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 08:34:05.963344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 1 08:34:05.966398 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 1 08:34:05.967324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 1 08:34:05.967355 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:34:05.969309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 1 08:34:05.972414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 1 08:34:05.973711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:34:05.978413 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 1 08:34:05.981416 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 1 08:34:05.982049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 1 08:34:05.986835 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 1 08:34:05.987962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:34:05.990391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:34:05.992833 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 1 08:34:05.998333 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 1 08:34:06.001150 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 1 08:34:06.002413 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 1 08:34:06.017856 systemd-journald[1151]: Time spent on flushing to /var/log/journal/8469e05e177f467abe8e9af5117cf696 is 40.084ms for 975 entries. Jul 1 08:34:06.017856 systemd-journald[1151]: System Journal (/var/log/journal/8469e05e177f467abe8e9af5117cf696) is 8M, max 584.8M, 576.8M free. Jul 1 08:34:06.117807 systemd-journald[1151]: Received client request to flush runtime journal. Jul 1 08:34:06.117890 kernel: loop0: detected capacity change from 0 to 8 Jul 1 08:34:06.117914 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 1 08:34:06.117934 kernel: loop1: detected capacity change from 0 to 146336 Jul 1 08:34:06.040479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:34:06.045134 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 1 08:34:06.046018 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 1 08:34:06.047636 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 1 08:34:06.079505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:34:06.119620 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 1 08:34:06.141458 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 1 08:34:06.172910 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 1 08:34:06.182493 kernel: loop2: detected capacity change from 0 to 114000 Jul 1 08:34:06.178385 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:34:06.224901 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 1 08:34:06.224919 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 1 08:34:06.233283 kernel: loop3: detected capacity change from 0 to 229808 Jul 1 08:34:06.232624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:34:06.286409 kernel: loop4: detected capacity change from 0 to 8 Jul 1 08:34:06.290297 kernel: loop5: detected capacity change from 0 to 146336 Jul 1 08:34:06.365299 kernel: loop6: detected capacity change from 0 to 114000 Jul 1 08:34:06.433970 kernel: loop7: detected capacity change from 0 to 229808 Jul 1 08:34:06.492740 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 1 08:34:06.493687 (sd-merge)[1223]: Merged extensions into '/usr'. Jul 1 08:34:06.508844 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Jul 1 08:34:06.509032 systemd[1]: Reloading... Jul 1 08:34:06.703378 zram_generator::config[1285]: No configuration found. 
Jul 1 08:34:06.817721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:34:06.974346 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 1 08:34:06.974743 systemd[1]: Reloading finished in 465 ms. Jul 1 08:34:06.993725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 1 08:34:07.004386 systemd[1]: Starting ensure-sysext.service... Jul 1 08:34:07.007475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:34:07.016906 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 1 08:34:07.021448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:34:07.043358 systemd[1]: Reload requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)... Jul 1 08:34:07.043377 systemd[1]: Reloading... Jul 1 08:34:07.053126 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 1 08:34:07.053161 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 1 08:34:07.055581 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 1 08:34:07.055870 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 1 08:34:07.056627 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 1 08:34:07.056897 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jul 1 08:34:07.056957 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Jul 1 08:34:07.063845 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:34:07.063856 systemd-tmpfiles[1305]: Skipping /boot Jul 1 08:34:07.074046 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:34:07.074057 systemd-tmpfiles[1305]: Skipping /boot Jul 1 08:34:07.091852 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Jul 1 08:34:07.143287 zram_generator::config[1335]: No configuration found. Jul 1 08:34:07.271221 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 1 08:34:07.396987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:34:07.509287 kernel: mousedev: PS/2 mouse device common for all mice Jul 1 08:34:07.515379 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 1 08:34:07.525287 kernel: ACPI: button: Power Button [PWRF] Jul 1 08:34:07.541642 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 1 08:34:07.542663 systemd[1]: Reloading finished in 498 ms. Jul 1 08:34:07.610473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:34:07.612879 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 1 08:34:07.613834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 1 08:34:07.633068 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 1 08:34:07.633365 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 1 08:34:07.661467 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:34:07.664589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 1 08:34:07.666899 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 1 08:34:07.673554 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:34:07.677640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:34:07.683336 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 1 08:34:07.692517 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.692720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:34:07.695291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:34:07.699352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:34:07.703830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:34:07.704810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:34:07.704942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:34:07.705066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.712567 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 1 08:34:07.714567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.714763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:34:07.714935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:34:07.715039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:34:07.715149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.719939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.720194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:34:07.721542 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 08:34:07.729829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 1 08:34:07.730005 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:34:07.730188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:34:07.738331 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 1 08:34:07.740171 systemd[1]: Finished ensure-sysext.service. Jul 1 08:34:07.742742 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 1 08:34:07.764613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:34:07.767706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 1 08:34:07.794417 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 1 08:34:07.794488 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 1 08:34:07.798608 kernel: Console: switching to colour dummy device 80x25 Jul 1 08:34:07.800632 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 1 08:34:07.800669 kernel: [drm] features: -context_init Jul 1 08:34:07.800743 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 08:34:07.800961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 08:34:07.801499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 08:34:07.801705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:34:07.801908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 08:34:07.804784 kernel: [drm] number of scanouts: 1 Jul 1 08:34:07.804824 kernel: [drm] number of cap sets: 0 Jul 1 08:34:07.804818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:34:07.809786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:34:07.811290 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jul 1 08:34:07.814236 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:34:07.815302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:34:07.815939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:34:07.833326 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 1 08:34:07.837979 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 1 08:34:07.840464 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 1 08:34:07.845620 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 1 08:34:07.845888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 1 08:34:07.859715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:34:07.869325 augenrules[1487]: No rules Jul 1 08:34:07.870703 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 1 08:34:07.872050 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:34:07.891320 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 1 08:34:07.936781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:34:07.939311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:34:07.941048 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:34:07.947219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:34:07.947435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 1 08:34:08.056411 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 1 08:34:08.056545 systemd[1]: Reached target time-set.target - System Time Set. Jul 1 08:34:08.086169 systemd-networkd[1438]: lo: Link UP Jul 1 08:34:08.086180 systemd-networkd[1438]: lo: Gained carrier Jul 1 08:34:08.087551 systemd-networkd[1438]: Enumeration completed Jul 1 08:34:08.087637 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:34:08.089442 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 1 08:34:08.091420 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 1 08:34:08.092360 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:34:08.092365 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 08:34:08.092984 systemd-networkd[1438]: eth0: Link UP Jul 1 08:34:08.093126 systemd-networkd[1438]: eth0: Gained carrier Jul 1 08:34:08.093142 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:34:08.108353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:34:08.109525 systemd-networkd[1438]: eth0: DHCPv4 address 172.24.4.49/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 1 08:34:08.110597 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Jul 1 08:34:08.112853 systemd-resolved[1441]: Positive Trust Anchors: Jul 1 08:34:08.112873 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:34:08.112915 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:34:08.119609 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 1 08:34:08.120036 systemd-resolved[1441]: Using system hostname 'ci-9999-9-9-s-39d8ad6622.novalocal'. Jul 1 08:34:08.121432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:34:08.121651 systemd[1]: Reached target network.target - Network. 
Jul 1 08:34:08.121722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:34:08.121932 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:34:08.122088 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 1 08:34:08.122177 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 1 08:34:08.122326 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 1 08:34:08.122526 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 1 08:34:08.122910 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 1 08:34:08.122985 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 1 08:34:08.123047 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 1 08:34:08.123075 systemd[1]: Reached target paths.target - Path Units. Jul 1 08:34:08.123136 systemd[1]: Reached target timers.target - Timer Units. Jul 1 08:34:08.124661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 1 08:34:08.125934 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 1 08:34:08.128336 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 1 08:34:08.128533 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 1 08:34:08.128615 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 1 08:34:08.131846 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 1 08:34:08.132160 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 1 08:34:08.132825 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 1 08:34:08.133569 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:34:08.133644 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:34:08.133755 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:34:08.133785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:34:08.134680 systemd[1]: Starting containerd.service - containerd container runtime... Jul 1 08:34:08.137360 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 1 08:34:08.138366 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 1 08:34:08.140487 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 1 08:34:08.144856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 1 08:34:08.149480 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 1 08:34:08.150353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 1 08:34:08.156503 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 1 08:34:08.159049 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 1 08:34:08.162299 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:08.164014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 1 08:34:08.171865 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 1 08:34:08.174380 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 1 08:34:08.177989 jq[1521]: false Jul 1 08:34:08.179785 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 1 08:34:08.180706 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 1 08:34:08.185300 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing passwd entry cache Jul 1 08:34:08.184941 oslogin_cache_refresh[1523]: Refreshing passwd entry cache Jul 1 08:34:08.187514 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 1 08:34:08.191403 systemd[1]: Starting update-engine.service - Update Engine... Jul 1 08:34:08.197451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 1 08:34:08.199083 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting users, quitting Jul 1 08:34:08.199083 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:34:08.199083 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing group entry cache Jul 1 08:34:08.198575 oslogin_cache_refresh[1523]: Failure getting users, quitting Jul 1 08:34:08.198596 oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:34:08.198645 oslogin_cache_refresh[1523]: Refreshing group entry cache Jul 1 08:34:08.199477 extend-filesystems[1522]: Found /dev/vda6 Jul 1 08:34:08.214586 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting groups, quitting Jul 1 08:34:08.214586 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:34:08.213639 oslogin_cache_refresh[1523]: Failure getting groups, quitting Jul 1 08:34:08.213653 oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:34:08.219706 extend-filesystems[1522]: Found /dev/vda9 Jul 1 08:34:08.220124 extend-filesystems[1522]: Checking size of /dev/vda9 Jul 1 08:34:08.230398 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 1 08:34:08.230712 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 1 08:34:08.230879 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 1 08:34:08.231114 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 1 08:34:08.231368 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 1 08:34:08.245798 jq[1535]: true Jul 1 08:34:08.381108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 1 08:34:08.381392 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 1 08:34:08.393041 systemd[1]: motdgen.service: Deactivated successfully. Jul 1 08:34:08.393225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 1 08:34:08.419824 extend-filesystems[1522]: Resized partition /dev/vda9 Jul 1 08:34:08.428279 jq[1549]: true Jul 1 08:34:08.429067 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 1 08:34:08.431174 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025) Jul 1 08:34:08.451508 update_engine[1534]: I20250701 08:34:08.450915 1534 main.cc:92] Flatcar Update Engine starting Jul 1 08:34:08.460355 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 1 08:34:08.490285 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 1 08:34:08.535181 update_engine[1534]: I20250701 08:34:08.534854 1534 update_check_scheduler.cc:74] Next update check in 3m9s Jul 1 08:34:08.526073 dbus-daemon[1519]: [system] SELinux support is enabled Jul 1 08:34:08.526382 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 1 08:34:08.531237 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 1 08:34:08.532892 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 1 08:34:08.533008 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 1 08:34:08.533025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 1 08:34:08.534909 systemd[1]: Started update-engine.service - Update Engine. Jul 1 08:34:08.540485 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 1 08:34:08.543367 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 1 08:34:08.543367 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 1 08:34:08.543367 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 1 08:34:08.543987 extend-filesystems[1522]: Resized filesystem in /dev/vda9 Jul 1 08:34:08.545215 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 1 08:34:08.546795 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 1 08:34:08.587796 bash[1586]: Updated "/home/core/.ssh/authorized_keys" Jul 1 08:34:08.596297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 1 08:34:08.601878 systemd[1]: Starting sshkeys.service... Jul 1 08:34:08.603478 tar[1547]: linux-amd64/LICENSE Jul 1 08:34:08.605724 tar[1547]: linux-amd64/helm Jul 1 08:34:08.642352 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 1 08:34:08.643884 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 1 08:34:08.670156 systemd-logind[1530]: New seat seat0. Jul 1 08:34:08.672941 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Power Button) Jul 1 08:34:08.672963 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 1 08:34:08.673120 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 1 08:34:08.677697 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:09.019858 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 1 08:34:09.041012 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 1 08:34:09.091726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 1 08:34:09.095535 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 1 08:34:09.223500 systemd[1]: issuegen.service: Deactivated successfully. Jul 1 08:34:09.223713 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 1 08:34:09.229763 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:09.234631 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 1 08:34:09.283116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 1 08:34:09.287901 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 1 08:34:09.293709 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 1 08:34:09.294121 systemd[1]: Reached target getty.target - Login Prompts. Jul 1 08:34:09.337026 containerd[1551]: time="2025-07-01T08:34:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 1 08:34:09.337747 containerd[1551]: time="2025-07-01T08:34:09.337718195Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 1 08:34:09.358958 containerd[1551]: time="2025-07-01T08:34:09.358877619Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.425µs" Jul 1 08:34:09.358958 containerd[1551]: time="2025-07-01T08:34:09.358917885Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 1 08:34:09.358958 containerd[1551]: time="2025-07-01T08:34:09.358940587Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 1 08:34:09.359479 containerd[1551]: time="2025-07-01T08:34:09.359161371Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 1 08:34:09.359479 containerd[1551]: time="2025-07-01T08:34:09.359183843Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 1 08:34:09.359479 containerd[1551]: time="2025-07-01T08:34:09.359220181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359479 containerd[1551]: time="2025-07-01T08:34:09.359312976Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359479 containerd[1551]: time="2025-07-01T08:34:09.359327853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359630 containerd[1551]: time="2025-07-01T08:34:09.359602509Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359630 containerd[1551]: time="2025-07-01T08:34:09.359621514Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359677 containerd[1551]: time="2025-07-01T08:34:09.359633276Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359677 containerd[1551]: time="2025-07-01T08:34:09.359642854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 1 08:34:09.359773 containerd[1551]: time="2025-07-01T08:34:09.359749755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 1 08:34:09.360093 containerd[1551]: time="2025-07-01T08:34:09.359998070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:34:09.360093 containerd[1551]: time="2025-07-01T08:34:09.360031613Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:34:09.360093 containerd[1551]: time="2025-07-01T08:34:09.360043736Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 1 08:34:09.360093 containerd[1551]: time="2025-07-01T08:34:09.360078501Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 1 08:34:09.360619 containerd[1551]: time="2025-07-01T08:34:09.360384966Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 1 08:34:09.360619 containerd[1551]: time="2025-07-01T08:34:09.360454607Z" level=info msg="metadata content store policy set" policy=shared Jul 1 08:34:09.371628 containerd[1551]: time="2025-07-01T08:34:09.371580353Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 1 08:34:09.371713 containerd[1551]: time="2025-07-01T08:34:09.371649332Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 1 08:34:09.371713 containerd[1551]: time="2025-07-01T08:34:09.371666234Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 1 08:34:09.371713 containerd[1551]: time="2025-07-01T08:34:09.371679208Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 1 08:34:09.371713 containerd[1551]: time="2025-07-01T08:34:09.371693916Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 1 08:34:09.371713 containerd[1551]: time="2025-07-01T08:34:09.371710317Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 1 08:34:09.371859 containerd[1551]: time="2025-07-01T08:34:09.371743669Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 1 08:34:09.371859 containerd[1551]: time="2025-07-01T08:34:09.371763416Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 1 08:34:09.371859 containerd[1551]: time="2025-07-01T08:34:09.371775739Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 1 08:34:09.371859 containerd[1551]: 
time="2025-07-01T08:34:09.371786610Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 1 08:34:09.371859 containerd[1551]: time="2025-07-01T08:34:09.371797039Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 1 08:34:09.371859 containerd[1551]: time="2025-07-01T08:34:09.371810685Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 1 08:34:09.372009 containerd[1551]: time="2025-07-01T08:34:09.371926051Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 1 08:34:09.372009 containerd[1551]: time="2025-07-01T08:34:09.371955737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 1 08:34:09.372009 containerd[1551]: time="2025-07-01T08:34:09.371992055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 1 08:34:09.372081 containerd[1551]: time="2025-07-01T08:34:09.372030177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 1 08:34:09.372081 containerd[1551]: time="2025-07-01T08:34:09.372043732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 1 08:34:09.372081 containerd[1551]: time="2025-07-01T08:34:09.372054823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 1 08:34:09.372081 containerd[1551]: time="2025-07-01T08:34:09.372066375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 1 08:34:09.372081 containerd[1551]: time="2025-07-01T08:34:09.372080170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 1 08:34:09.372228 containerd[1551]: time="2025-07-01T08:34:09.372092433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 1 08:34:09.372228 containerd[1551]: time="2025-07-01T08:34:09.372104757Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 1 08:34:09.372228 containerd[1551]: time="2025-07-01T08:34:09.372115336Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 1 08:34:09.372228 containerd[1551]: time="2025-07-01T08:34:09.372200737Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 1 08:34:09.372228 containerd[1551]: time="2025-07-01T08:34:09.372227286Z" level=info msg="Start snapshots syncer" Jul 1 08:34:09.373554 containerd[1551]: time="2025-07-01T08:34:09.372278693Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 1 08:34:09.373554 containerd[1551]: time="2025-07-01T08:34:09.372997821Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 1 08:34:09.373756 containerd[1551]: time="2025-07-01T08:34:09.373172499Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 1 08:34:09.373756 containerd[1551]: time="2025-07-01T08:34:09.373614017Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 1 08:34:09.373756 containerd[1551]: time="2025-07-01T08:34:09.373716479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 1 08:34:09.373756 containerd[1551]: time="2025-07-01T08:34:09.373746706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373766142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373783565Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373822047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373852164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373870819Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 1 08:34:09.373913 containerd[1551]: time="2025-07-01T08:34:09.373906696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 1 08:34:09.374049 containerd[1551]: 
time="2025-07-01T08:34:09.373924670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.373942603Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.373977489Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.373999250Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.374013336Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.374028184Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:34:09.374049 containerd[1551]: time="2025-07-01T08:34:09.374037692Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374052339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374071826Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374093957Z" level=info msg="runtime interface created" Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374104176Z" level=info msg="created NRI interface" Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374113394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374128843Z" level=info msg="Connect containerd service" Jul 1 08:34:09.374207 containerd[1551]: time="2025-07-01T08:34:09.374159360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 1 08:34:09.376212 containerd[1551]: time="2025-07-01T08:34:09.376159742Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:34:09.435389 tar[1547]: linux-amd64/README.md Jul 1 08:34:09.450926 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 1 08:34:09.806738 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:09.802515 systemd-networkd[1438]: eth0: Gained IPv6LL Jul 1 08:34:09.804150 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Jul 1 08:34:09.808710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 08:34:09.812382 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 08:34:09.828064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:09.833280 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 1 08:34:09.849121 containerd[1551]: time="2025-07-01T08:34:09.849033376Z" level=info msg="Start subscribing containerd event" Jul 1 08:34:09.849207 containerd[1551]: time="2025-07-01T08:34:09.849150055Z" level=info msg="Start recovering state" Jul 1 08:34:09.849429 containerd[1551]: time="2025-07-01T08:34:09.849407387Z" level=info msg="Start event monitor" Jul 1 08:34:09.849485 containerd[1551]: time="2025-07-01T08:34:09.849450378Z" level=info msg="Start cni network conf syncer for default" Jul 1 08:34:09.849485 containerd[1551]: time="2025-07-01T08:34:09.849462371Z" level=info msg="Start streaming server" Jul 1 08:34:09.849485 containerd[1551]: time="2025-07-01T08:34:09.849482929Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 1 08:34:09.849557 containerd[1551]: time="2025-07-01T08:34:09.849506032Z" level=info msg="runtime interface starting up..." Jul 1 08:34:09.849557 containerd[1551]: time="2025-07-01T08:34:09.849515520Z" level=info msg="starting plugins..." Jul 1 08:34:09.849557 containerd[1551]: time="2025-07-01T08:34:09.849539395Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 1 08:34:09.850067 containerd[1551]: time="2025-07-01T08:34:09.850026699Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 1 08:34:09.850138 containerd[1551]: time="2025-07-01T08:34:09.850087964Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 1 08:34:09.850282 containerd[1551]: time="2025-07-01T08:34:09.850214872Z" level=info msg="containerd successfully booted in 0.513594s" Jul 1 08:34:09.850431 systemd[1]: Started containerd.service - containerd container runtime. Jul 1 08:34:09.869132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 1 08:34:11.257375 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:11.864312 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:12.646120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:12.666986 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:34:13.455906 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 1 08:34:13.458747 systemd[1]: Started sshd@0-172.24.4.49:22-172.24.4.1:45124.service - OpenSSH per-connection server daemon (172.24.4.1:45124). Jul 1 08:34:13.948411 kubelet[1656]: E0701 08:34:13.948061 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:34:13.955997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:34:13.956603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:34:13.957907 systemd[1]: kubelet.service: Consumed 2.885s CPU time, 269.3M memory peak. Jul 1 08:34:14.398432 login[1617]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 1 08:34:14.407331 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 1 08:34:14.433075 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 1 08:34:14.438746 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
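The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is only written when the node is initialized or joined, so this crash-and-restart cycle is normal until bootstrap happens. A sketch of checking for the file and bootstrapping with kubeadm (the pod CIDR is an arbitrary example value, to be matched to whichever CNI add-on is chosen):

  # The kubelet config is created by kubeadm during init/join; until then the
  # service is expected to crash-loop.
  test -f /var/lib/kubelet/config.yaml || echo "node not bootstrapped yet"

  # On a control-plane node (example CIDR):
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  # On a worker, run the `kubeadm join ...` command printed by init instead.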
Jul 1 08:34:14.472572 systemd-logind[1530]: New session 2 of user core. Jul 1 08:34:14.487984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 1 08:34:14.493772 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 1 08:34:14.507861 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 1 08:34:14.510466 systemd-logind[1530]: New session c1 of user core. Jul 1 08:34:14.752131 systemd[1673]: Queued start job for default target default.target. Jul 1 08:34:14.777930 systemd[1673]: Created slice app.slice - User Application Slice. Jul 1 08:34:14.778364 systemd[1673]: Reached target paths.target - Paths. Jul 1 08:34:14.778718 systemd[1673]: Reached target timers.target - Timers. Jul 1 08:34:14.782587 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 1 08:34:14.836720 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 1 08:34:14.837636 systemd[1673]: Reached target sockets.target - Sockets. Jul 1 08:34:14.838343 systemd[1673]: Reached target basic.target - Basic System. Jul 1 08:34:14.838671 systemd[1673]: Reached target default.target - Main User Target. Jul 1 08:34:14.838744 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 1 08:34:14.839237 systemd[1673]: Startup finished in 321ms. Jul 1 08:34:14.852765 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 1 08:34:15.041676 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 45124 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:15.043729 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:15.059380 systemd-logind[1530]: New session 3 of user core. Jul 1 08:34:15.075105 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 1 08:34:15.332403 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:15.363890 coreos-metadata[1518]: Jul 01 08:34:15.363 WARN failed to locate config-drive, using the metadata service API instead Jul 1 08:34:15.401225 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 1 08:34:15.428732 systemd-logind[1530]: New session 1 of user core. Jul 1 08:34:15.441721 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 1 08:34:15.505605 coreos-metadata[1518]: Jul 01 08:34:15.504 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 1 08:34:15.665220 systemd[1]: Started sshd@1-172.24.4.49:22-172.24.4.1:45134.service - OpenSSH per-connection server daemon (172.24.4.1:45134). 
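The repeated kernel messages about /dev/disk/by-label/config-2 and the coreos-metadata warning above show that no OpenStack config drive is attached, so the agent falls back to the HTTP metadata service at 169.254.169.254 (the exact endpoint it fetches appears in the log). A quick manual check along the same lines:

  # Is a config drive (filesystem label "config-2") present? Not on this instance.
  blkid -L config-2 || echo "no config drive attached"
  # Fall back to the link-local metadata service, as coreos-metadata does
  curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json | head -c 300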
Jul 1 08:34:15.695368 coreos-metadata[1518]: Jul 01 08:34:15.695 INFO Fetch successful Jul 1 08:34:15.695803 coreos-metadata[1518]: Jul 01 08:34:15.695 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 1 08:34:15.710435 coreos-metadata[1518]: Jul 01 08:34:15.710 INFO Fetch successful Jul 1 08:34:15.710793 coreos-metadata[1518]: Jul 01 08:34:15.710 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 1 08:34:15.724719 coreos-metadata[1518]: Jul 01 08:34:15.724 INFO Fetch successful Jul 1 08:34:15.724719 coreos-metadata[1518]: Jul 01 08:34:15.724 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 1 08:34:15.736834 coreos-metadata[1518]: Jul 01 08:34:15.736 INFO Fetch successful Jul 1 08:34:15.737464 coreos-metadata[1518]: Jul 01 08:34:15.737 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 1 08:34:15.749217 coreos-metadata[1518]: Jul 01 08:34:15.749 INFO Fetch successful Jul 1 08:34:15.749217 coreos-metadata[1518]: Jul 01 08:34:15.749 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 1 08:34:15.761115 coreos-metadata[1518]: Jul 01 08:34:15.761 INFO Fetch successful Jul 1 08:34:15.824074 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 1 08:34:15.828206 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 08:34:15.892329 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 1 08:34:15.906659 coreos-metadata[1590]: Jul 01 08:34:15.906 WARN failed to locate config-drive, using the metadata service API instead Jul 1 08:34:15.953948 coreos-metadata[1590]: Jul 01 08:34:15.952 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 1 08:34:15.967435 coreos-metadata[1590]: Jul 01 08:34:15.967 INFO Fetch successful Jul 1 08:34:15.967435 coreos-metadata[1590]: Jul 01 08:34:15.967 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 1 08:34:15.982808 coreos-metadata[1590]: Jul 01 08:34:15.982 INFO Fetch successful Jul 1 08:34:15.999619 unknown[1590]: wrote ssh authorized keys file for user: core Jul 1 08:34:16.185248 update-ssh-keys[1719]: Updated "/home/core/.ssh/authorized_keys" Jul 1 08:34:16.187939 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 1 08:34:16.194580 systemd[1]: Finished sshkeys.service. Jul 1 08:34:16.200144 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 1 08:34:16.200994 systemd[1]: Startup finished in 3.900s (kernel) + 17.752s (initrd) + 11.806s (userspace) = 33.459s. Jul 1 08:34:17.047179 sshd[1709]: Accepted publickey for core from 172.24.4.1 port 45134 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:17.050796 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:17.062470 systemd-logind[1530]: New session 4 of user core. Jul 1 08:34:17.067624 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 1 08:34:17.689865 sshd[1723]: Connection closed by 172.24.4.1 port 45134 Jul 1 08:34:17.689179 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jul 1 08:34:17.719314 systemd[1]: sshd@1-172.24.4.49:22-172.24.4.1:45134.service: Deactivated successfully. Jul 1 08:34:17.723557 systemd[1]: session-4.scope: Deactivated successfully. 
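The "Startup finished in 3.900s (kernel) + 17.752s (initrd) + 11.806s (userspace)" line is the summary systemd prints once the default target is reached. The same figures, plus a per-unit breakdown, can be queried after boot:

  # Overall boot time split into kernel/initrd/userspace, as logged above
  systemd-analyze
  # Units ordered by how long they took to start
  systemd-analyze blame | head -n 15
  # Chain of units on the critical path to the default target
  systemd-analyze critical-chain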
Jul 1 08:34:17.726963 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Jul 1 08:34:17.734670 systemd[1]: Started sshd@2-172.24.4.49:22-172.24.4.1:45142.service - OpenSSH per-connection server daemon (172.24.4.1:45142). Jul 1 08:34:17.737318 systemd-logind[1530]: Removed session 4. Jul 1 08:34:18.930920 sshd[1729]: Accepted publickey for core from 172.24.4.1 port 45142 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:18.934704 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:18.948325 systemd-logind[1530]: New session 5 of user core. Jul 1 08:34:18.959445 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 1 08:34:19.572239 sshd[1732]: Connection closed by 172.24.4.1 port 45142 Jul 1 08:34:19.574045 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 1 08:34:19.594059 systemd[1]: sshd@2-172.24.4.49:22-172.24.4.1:45142.service: Deactivated successfully. Jul 1 08:34:19.598635 systemd[1]: session-5.scope: Deactivated successfully. Jul 1 08:34:19.600914 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Jul 1 08:34:19.611788 systemd[1]: Started sshd@3-172.24.4.49:22-172.24.4.1:45154.service - OpenSSH per-connection server daemon (172.24.4.1:45154). Jul 1 08:34:19.615583 systemd-logind[1530]: Removed session 5. Jul 1 08:34:20.775189 sshd[1738]: Accepted publickey for core from 172.24.4.1 port 45154 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:20.778854 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:20.794488 systemd-logind[1530]: New session 6 of user core. Jul 1 08:34:20.801673 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 1 08:34:21.415985 sshd[1741]: Connection closed by 172.24.4.1 port 45154 Jul 1 08:34:21.418674 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jul 1 08:34:21.439869 systemd[1]: sshd@3-172.24.4.49:22-172.24.4.1:45154.service: Deactivated successfully. Jul 1 08:34:21.446468 systemd[1]: session-6.scope: Deactivated successfully. Jul 1 08:34:21.451125 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Jul 1 08:34:21.461742 systemd[1]: Started sshd@4-172.24.4.49:22-172.24.4.1:45170.service - OpenSSH per-connection server daemon (172.24.4.1:45170). Jul 1 08:34:21.468716 systemd-logind[1530]: Removed session 6. Jul 1 08:34:22.886595 sshd[1747]: Accepted publickey for core from 172.24.4.1 port 45170 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:22.889814 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:22.904391 systemd-logind[1530]: New session 7 of user core. Jul 1 08:34:22.915562 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 1 08:34:23.476200 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 1 08:34:23.477959 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:34:23.497998 sudo[1751]: pam_unix(sudo:session): session closed for user root Jul 1 08:34:23.762664 sshd[1750]: Connection closed by 172.24.4.1 port 45170 Jul 1 08:34:23.764090 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jul 1 08:34:23.782520 systemd[1]: sshd@4-172.24.4.49:22-172.24.4.1:45170.service: Deactivated successfully. 
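The sudo entry above runs /usr/sbin/setenforce 1, switching SELinux to enforcing mode for the running system (presumably part of the provisioning steps; on its own this does not persist across reboots). Checking and toggling the mode manually, assuming the SELinux userland tools are installed:

  # Current mode: Enforcing, Permissive or Disabled
  getenforce
  # Switch to enforcing at runtime (does not edit /etc/selinux/config)
  sudo setenforce 1
  # Fuller report, if the policy tools are present
  sestatus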
Jul 1 08:34:23.786648 systemd[1]: session-7.scope: Deactivated successfully. Jul 1 08:34:23.788955 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Jul 1 08:34:23.795873 systemd[1]: Started sshd@5-172.24.4.49:22-172.24.4.1:56596.service - OpenSSH per-connection server daemon (172.24.4.1:56596). Jul 1 08:34:23.798031 systemd-logind[1530]: Removed session 7. Jul 1 08:34:24.208511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 1 08:34:24.214036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:24.972938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:24.990312 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:34:25.124771 sshd[1757]: Accepted publickey for core from 172.24.4.1 port 56596 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:25.129583 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:25.146587 systemd-logind[1530]: New session 8 of user core. Jul 1 08:34:25.162156 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 1 08:34:25.215704 kubelet[1768]: E0701 08:34:25.215554 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:34:25.222902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:34:25.223173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:34:25.224374 systemd[1]: kubelet.service: Consumed 875ms CPU time, 108.8M memory peak. Jul 1 08:34:25.593149 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 1 08:34:25.593953 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:34:25.609234 sudo[1777]: pam_unix(sudo:session): session closed for user root Jul 1 08:34:25.623624 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 1 08:34:25.624245 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:34:25.653217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:34:25.770012 augenrules[1799]: No rules Jul 1 08:34:25.771937 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:34:25.772873 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:34:25.775777 sudo[1776]: pam_unix(sudo:session): session closed for user root Jul 1 08:34:25.985471 sshd[1774]: Connection closed by 172.24.4.1 port 56596 Jul 1 08:34:25.986821 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 1 08:34:26.007531 systemd[1]: sshd@5-172.24.4.49:22-172.24.4.1:56596.service: Deactivated successfully. Jul 1 08:34:26.012080 systemd[1]: session-8.scope: Deactivated successfully. Jul 1 08:34:26.015333 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Jul 1 08:34:26.021927 systemd[1]: Started sshd@6-172.24.4.49:22-172.24.4.1:56602.service - OpenSSH per-connection server daemon (172.24.4.1:56602). 
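The session above removes two rule fragments from /etc/audit/rules.d and restarts audit-rules.service, after which augenrules reports "No rules". augenrules assembles everything under /etc/audit/rules.d into the kernel audit ruleset, so an empty directory legitimately yields an empty ruleset. Verifying the result by hand:

  # Rebuild and load the ruleset from /etc/audit/rules.d
  sudo augenrules --load
  # List what is actually loaded in the kernel ("No rules" is expected here)
  sudo auditctl -l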
Jul 1 08:34:26.024105 systemd-logind[1530]: Removed session 8. Jul 1 08:34:27.196029 sshd[1808]: Accepted publickey for core from 172.24.4.1 port 56602 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:34:27.199188 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:34:27.214347 systemd-logind[1530]: New session 9 of user core. Jul 1 08:34:27.228687 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 1 08:34:27.548223 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 1 08:34:27.549045 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:34:28.990172 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 1 08:34:29.005634 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 1 08:34:29.968830 dockerd[1830]: time="2025-07-01T08:34:29.968649751Z" level=info msg="Starting up" Jul 1 08:34:29.971501 dockerd[1830]: time="2025-07-01T08:34:29.971379892Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 1 08:34:30.058373 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3202554399-merged.mount: Deactivated successfully. Jul 1 08:34:30.104209 dockerd[1830]: time="2025-07-01T08:34:30.104106741Z" level=info msg="Loading containers: start." Jul 1 08:34:30.135386 kernel: Initializing XFRM netlink socket Jul 1 08:34:30.607804 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Jul 1 08:34:30.672342 systemd-networkd[1438]: docker0: Link UP Jul 1 08:34:30.679756 dockerd[1830]: time="2025-07-01T08:34:30.679698894Z" level=info msg="Loading containers: done." Jul 1 08:34:30.702468 dockerd[1830]: time="2025-07-01T08:34:30.700829424Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 1 08:34:30.702468 dockerd[1830]: time="2025-07-01T08:34:30.700938399Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 1 08:34:30.702468 dockerd[1830]: time="2025-07-01T08:34:30.701072821Z" level=info msg="Initializing buildkit" Jul 1 08:34:30.701950 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck771789643-merged.mount: Deactivated successfully. Jul 1 08:34:31.407542 systemd-resolved[1441]: Clock change detected. Flushing caches. Jul 1 08:34:31.409043 systemd-timesyncd[1459]: Contacted time server 192.155.94.72:123 (2.flatcar.pool.ntp.org). Jul 1 08:34:31.409709 systemd-timesyncd[1459]: Initial clock synchronization to Tue 2025-07-01 08:34:31.406647 UTC. Jul 1 08:34:31.448778 dockerd[1830]: time="2025-07-01T08:34:31.448609653Z" level=info msg="Completed buildkit initialization" Jul 1 08:34:31.466167 dockerd[1830]: time="2025-07-01T08:34:31.466049862Z" level=info msg="Daemon has completed initialization" Jul 1 08:34:31.466517 systemd[1]: Started docker.service - Docker Application Container Engine. 
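dockerd came up with the overlay2 storage driver and reports that the daemon has completed initialization. A quick sanity check once the daemon is listening on /run/docker.sock could look like this (sketch only; the hello-world pull needs outbound network access):

  # Daemon reachable? Prints server version and storage driver.
  docker info --format '{{.ServerVersion}} {{.Driver}}'
  # Smoke test: run a throwaway container
  docker run --rm hello-world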
Jul 1 08:34:31.468190 dockerd[1830]: time="2025-07-01T08:34:31.466522078Z" level=info msg="API listen on /run/docker.sock" Jul 1 08:34:32.985126 containerd[1551]: time="2025-07-01T08:34:32.984824414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 1 08:34:33.885608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042017381.mount: Deactivated successfully. Jul 1 08:34:35.941307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 1 08:34:35.946202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:36.285719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:36.306405 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:34:36.475949 kubelet[2094]: E0701 08:34:36.475854 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:34:36.479027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:34:36.479479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:34:36.480268 systemd[1]: kubelet.service: Consumed 416ms CPU time, 110.4M memory peak. Jul 1 08:34:36.486363 containerd[1551]: time="2025-07-01T08:34:36.486307885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:36.488916 containerd[1551]: time="2025-07-01T08:34:36.488877504Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jul 1 08:34:36.490601 containerd[1551]: time="2025-07-01T08:34:36.490566592Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:36.495273 containerd[1551]: time="2025-07-01T08:34:36.495225320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:36.497669 containerd[1551]: time="2025-07-01T08:34:36.497603981Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.511252613s" Jul 1 08:34:36.497669 containerd[1551]: time="2025-07-01T08:34:36.497659255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 1 08:34:36.498272 containerd[1551]: time="2025-07-01T08:34:36.498241486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 1 08:34:39.279105 containerd[1551]: time="2025-07-01T08:34:39.279018062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 1 08:34:39.280808 containerd[1551]: time="2025-07-01T08:34:39.280536360Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954" Jul 1 08:34:39.282101 containerd[1551]: time="2025-07-01T08:34:39.282047013Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:39.285339 containerd[1551]: time="2025-07-01T08:34:39.285302449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:39.286431 containerd[1551]: time="2025-07-01T08:34:39.286396290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.788114749s" Jul 1 08:34:39.286602 containerd[1551]: time="2025-07-01T08:34:39.286581668Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 1 08:34:39.287416 containerd[1551]: time="2025-07-01T08:34:39.287376609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 1 08:34:41.914590 containerd[1551]: time="2025-07-01T08:34:41.914385954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:41.932623 containerd[1551]: time="2025-07-01T08:34:41.932516127Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063" Jul 1 08:34:41.956325 containerd[1551]: time="2025-07-01T08:34:41.956196620Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:41.983931 containerd[1551]: time="2025-07-01T08:34:41.983821510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:41.988879 containerd[1551]: time="2025-07-01T08:34:41.988718565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.701287354s" Jul 1 08:34:41.989429 containerd[1551]: time="2025-07-01T08:34:41.989174340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 1 08:34:41.994296 containerd[1551]: time="2025-07-01T08:34:41.994225814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 1 08:34:43.735786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022694714.mount: Deactivated 
successfully. Jul 1 08:34:44.823769 containerd[1551]: time="2025-07-01T08:34:44.823621294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:44.825584 containerd[1551]: time="2025-07-01T08:34:44.825551384Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754" Jul 1 08:34:44.826352 containerd[1551]: time="2025-07-01T08:34:44.826295249Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:44.829257 containerd[1551]: time="2025-07-01T08:34:44.829205407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:44.831032 containerd[1551]: time="2025-07-01T08:34:44.830853969Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.836563103s" Jul 1 08:34:44.831032 containerd[1551]: time="2025-07-01T08:34:44.830896228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 1 08:34:44.831935 containerd[1551]: time="2025-07-01T08:34:44.831678245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 1 08:34:45.580563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523593936.mount: Deactivated successfully. Jul 1 08:34:46.691704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 1 08:34:46.696246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:47.170193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:47.180379 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:34:47.259396 kubelet[2177]: E0701 08:34:47.259334 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:34:47.263390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:34:47.263535 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:34:47.264636 systemd[1]: kubelet.service: Consumed 240ms CPU time, 109.9M memory peak. 
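The PullImage entries from containerd correspond to the Kubernetes control-plane images being fetched into containerd's k8s.io namespace (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy so far). The same pulls can be reproduced or inspected directly against the CRI socket, assuming crictl is installed and pointed at containerd:

  # Pull one of the images seen in the log via the CRI API
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/kube-proxy:v1.33.2
  # List the images containerd holds in the k8s.io namespace
  sudo ctr -n k8s.io images ls | grep registry.k8s.io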
Jul 1 08:34:47.673046 containerd[1551]: time="2025-07-01T08:34:47.672869799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:47.675880 containerd[1551]: time="2025-07-01T08:34:47.675655534Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jul 1 08:34:48.105144 containerd[1551]: time="2025-07-01T08:34:48.104586163Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:48.116320 containerd[1551]: time="2025-07-01T08:34:48.116234650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:48.121104 containerd[1551]: time="2025-07-01T08:34:48.120811313Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.289081351s" Jul 1 08:34:48.121104 containerd[1551]: time="2025-07-01T08:34:48.120936769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 1 08:34:48.124456 containerd[1551]: time="2025-07-01T08:34:48.124378774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 1 08:34:48.741279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429832808.mount: Deactivated successfully. 
Jul 1 08:34:48.753131 containerd[1551]: time="2025-07-01T08:34:48.752945871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:34:48.755801 containerd[1551]: time="2025-07-01T08:34:48.754955010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 1 08:34:48.757628 containerd[1551]: time="2025-07-01T08:34:48.757553623Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:34:48.766379 containerd[1551]: time="2025-07-01T08:34:48.766285981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:34:48.768375 containerd[1551]: time="2025-07-01T08:34:48.768288306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 643.830304ms" Jul 1 08:34:48.768375 containerd[1551]: time="2025-07-01T08:34:48.768363617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 1 08:34:48.769437 containerd[1551]: time="2025-07-01T08:34:48.769362621Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 1 08:34:49.368969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196515361.mount: Deactivated successfully. 
Jul 1 08:34:52.534850 containerd[1551]: time="2025-07-01T08:34:52.534767727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:52.536573 containerd[1551]: time="2025-07-01T08:34:52.536501609Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183" Jul 1 08:34:52.538019 containerd[1551]: time="2025-07-01T08:34:52.537933956Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:52.541846 containerd[1551]: time="2025-07-01T08:34:52.541781602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:34:52.543919 containerd[1551]: time="2025-07-01T08:34:52.543818181Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.774383785s" Jul 1 08:34:52.543919 containerd[1551]: time="2025-07-01T08:34:52.543877373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 1 08:34:54.430188 update_engine[1534]: I20250701 08:34:54.429282 1534 update_attempter.cc:509] Updating boot flags... Jul 1 08:34:57.441929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 1 08:34:57.446233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:58.301185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:58.319379 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:34:58.371042 kubelet[2290]: E0701 08:34:58.370989 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:34:58.374868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:34:58.375005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:34:58.375342 systemd[1]: kubelet.service: Consumed 228ms CPU time, 109.8M memory peak. Jul 1 08:34:58.792539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:58.793545 systemd[1]: kubelet.service: Consumed 228ms CPU time, 109.8M memory peak. Jul 1 08:34:58.796457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:58.845265 systemd[1]: Reload requested from client PID 2304 ('systemctl') (unit session-9.scope)... Jul 1 08:34:58.845384 systemd[1]: Reloading... Jul 1 08:34:58.955155 zram_generator::config[2345]: No configuration found. 
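The "Reload requested from client PID 2304 ('systemctl') (unit session-9.scope)" entries most likely correspond to a systemctl daemon-reload issued from the interactive session, after which kubelet.service is stopped and restarted with its real configuration. Inspecting a crash-looping unit like kubelet from that same session:

  # Re-read unit files after they have been changed on disk
  sudo systemctl daemon-reload
  # Current state, last exit status and restart counter of the unit
  systemctl status kubelet --no-pager
  # Most recent kubelet output from this boot
  journalctl -u kubelet -b --no-pager | tail -n 20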
Jul 1 08:34:59.112668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:34:59.259684 systemd[1]: Reloading finished in 413 ms. Jul 1 08:34:59.429914 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 08:34:59.430201 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 08:34:59.430963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:59.435943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:34:59.729812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:34:59.751735 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:34:59.831166 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:34:59.831166 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:34:59.831166 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:34:59.831936 kubelet[2413]: I0701 08:34:59.831250 2413 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:35:00.776223 kubelet[2413]: I0701 08:35:00.776140 2413 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:35:00.776223 kubelet[2413]: I0701 08:35:00.776174 2413 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:35:00.776749 kubelet[2413]: I0701 08:35:00.776699 2413 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:35:00.894126 kubelet[2413]: I0701 08:35:00.893030 2413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:35:00.895156 kubelet[2413]: E0701 08:35:00.894519 2413 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 1 08:35:00.920967 kubelet[2413]: I0701 08:35:00.920910 2413 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:35:00.933880 kubelet[2413]: I0701 08:35:00.933824 2413 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:35:00.935551 kubelet[2413]: I0701 08:35:00.935347 2413 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:35:00.936234 kubelet[2413]: I0701 08:35:00.935408 2413 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-9-s-39d8ad6622.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:35:00.937659 kubelet[2413]: I0701 08:35:00.936893 2413 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:35:00.937659 kubelet[2413]: I0701 08:35:00.936950 2413 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:35:00.937659 kubelet[2413]: I0701 08:35:00.937450 2413 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:35:00.944013 kubelet[2413]: I0701 08:35:00.943969 2413 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:35:00.944334 kubelet[2413]: I0701 08:35:00.944299 2413 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:35:00.948288 kubelet[2413]: I0701 08:35:00.947676 2413 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:35:00.948288 kubelet[2413]: I0701 08:35:00.947781 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:35:00.964159 kubelet[2413]: E0701 08:35:00.964036 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-9-s-39d8ad6622.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:35:00.965107 kubelet[2413]: I0701 08:35:00.964730 2413 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:35:00.966777 kubelet[2413]: I0701 08:35:00.966718 2413 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:35:00.970401 kubelet[2413]: W0701 08:35:00.970360 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 1 08:35:00.984849 kubelet[2413]: I0701 08:35:00.984776 2413 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:35:00.985026 kubelet[2413]: I0701 08:35:00.984995 2413 server.go:1289] "Started kubelet" Jul 1 08:35:00.987997 kubelet[2413]: E0701 08:35:00.987969 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:35:00.989309 kubelet[2413]: I0701 08:35:00.989258 2413 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:35:00.991707 kubelet[2413]: I0701 08:35:00.991689 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:35:00.999040 kubelet[2413]: I0701 08:35:00.998897 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:35:00.999977 kubelet[2413]: I0701 08:35:00.999934 2413 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:35:01.001728 kubelet[2413]: I0701 08:35:01.001708 2413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:35:01.003748 kubelet[2413]: I0701 08:35:01.003731 2413 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:35:01.005816 kubelet[2413]: E0701 08:35:01.005792 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" Jul 1 08:35:01.008986 kubelet[2413]: I0701 08:35:01.008962 2413 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:35:01.009260 kubelet[2413]: I0701 08:35:01.009245 2413 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:35:01.011759 kubelet[2413]: E0701 08:35:01.008011 2413 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.49:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-9-s-39d8ad6622.novalocal.184e13a42f6c44fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-9-s-39d8ad6622.novalocal,UID:ci-9999-9-9-s-39d8ad6622.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-9-s-39d8ad6622.novalocal,},FirstTimestamp:2025-07-01 08:35:00.984890621 +0000 UTC m=+1.225687458,LastTimestamp:2025-07-01 08:35:00.984890621 +0000 UTC m=+1.225687458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-9-s-39d8ad6622.novalocal,}" Jul 1 08:35:01.015984 kubelet[2413]: E0701 08:35:01.015879 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-s-39d8ad6622.novalocal?timeout=10s\": dial tcp 
172.24.4.49:6443: connect: connection refused" interval="200ms" Jul 1 08:35:01.017475 kubelet[2413]: E0701 08:35:01.017364 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:35:01.019281 kubelet[2413]: I0701 08:35:01.019242 2413 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:35:01.023107 kubelet[2413]: I0701 08:35:01.022348 2413 factory.go:223] Registration of the containerd container factory successfully Jul 1 08:35:01.023107 kubelet[2413]: I0701 08:35:01.022381 2413 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:35:01.023107 kubelet[2413]: I0701 08:35:01.022519 2413 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:35:01.037104 kubelet[2413]: I0701 08:35:01.036893 2413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 1 08:35:01.042616 kubelet[2413]: I0701 08:35:01.042534 2413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 1 08:35:01.042616 kubelet[2413]: I0701 08:35:01.042578 2413 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:35:01.042878 kubelet[2413]: I0701 08:35:01.042849 2413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 1 08:35:01.043000 kubelet[2413]: I0701 08:35:01.042987 2413 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:35:01.043263 kubelet[2413]: E0701 08:35:01.043194 2413 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:35:01.048156 kubelet[2413]: E0701 08:35:01.048128 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:35:01.053870 kubelet[2413]: I0701 08:35:01.053846 2413 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:35:01.054047 kubelet[2413]: I0701 08:35:01.054008 2413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:35:01.054265 kubelet[2413]: I0701 08:35:01.054214 2413 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:35:01.061081 kubelet[2413]: I0701 08:35:01.060844 2413 policy_none.go:49] "None policy: Start" Jul 1 08:35:01.061081 kubelet[2413]: I0701 08:35:01.060912 2413 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:35:01.061081 kubelet[2413]: I0701 08:35:01.060954 2413 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:35:01.073793 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 1 08:35:01.089075 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 1 08:35:01.093869 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
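The repeated `connect: connection refused` errors against 172.24.4.49:6443 are expected while the control-plane static pods are still being created; note how the lease controller's retry interval starts at 200ms here and doubles on later attempts in the log (400ms, then 800ms). A rough sketch of that doubling-retry pattern, assuming a cap on the interval (the cap value is an illustration, not something shown in the log):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "172.24.4.49:6443" // endpoint taken from the log
	interval := 200 * time.Millisecond   // first retry interval seen in the log
	const maxInterval = 7 * time.Second  // assumed cap, for illustration only

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", apiServer, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms, matching the intervals in the log
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("giving up for now")
}
```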
Jul 1 08:35:01.104810 kubelet[2413]: E0701 08:35:01.104423 2413 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:35:01.104810 kubelet[2413]: I0701 08:35:01.104641 2413 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:35:01.104810 kubelet[2413]: I0701 08:35:01.104668 2413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:35:01.105274 kubelet[2413]: I0701 08:35:01.105185 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:35:01.107461 kubelet[2413]: E0701 08:35:01.107046 2413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:35:01.107581 kubelet[2413]: E0701 08:35:01.107517 2413 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" Jul 1 08:35:01.164180 systemd[1]: Created slice kubepods-burstable-pod096de24328c9676aab98b000546c4460.slice - libcontainer container kubepods-burstable-pod096de24328c9676aab98b000546c4460.slice. Jul 1 08:35:01.176941 kubelet[2413]: E0701 08:35:01.176599 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.183254 systemd[1]: Created slice kubepods-burstable-pod19d5fd3b369c71ac2860ad96d108053b.slice - libcontainer container kubepods-burstable-pod19d5fd3b369c71ac2860ad96d108053b.slice. Jul 1 08:35:01.201757 kubelet[2413]: E0701 08:35:01.201036 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.210147 systemd[1]: Created slice kubepods-burstable-podee2b7941fabf6fbe1f7f2e22150b16a6.slice - libcontainer container kubepods-burstable-podee2b7941fabf6fbe1f7f2e22150b16a6.slice. 
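The eviction manager started above enforces the HardEvictionThresholds listed in the container-manager config earlier in the log (memory.available < 100Mi, nodefs.available < 10%, and so on). A toy evaluation of two of those signals; the threshold values are from the log, while the sample readings are invented for illustration:

```go
package main

import "fmt"

func main() {
	const mi = int64(1024 * 1024)

	// Threshold values from the HardEvictionThresholds in the log.
	memoryThreshold := 100 * mi // "memory.available" < 100Mi
	nodefsThreshold := 0.10     // "nodefs.available" < 10%

	// Hypothetical current readings, not taken from the log.
	memoryAvailable := 80 * mi
	nodefsAvailable := 0.07

	if memoryAvailable < memoryThreshold {
		fmt.Println("memory.available below 100Mi: eviction would begin")
	}
	if nodefsAvailable < nodefsThreshold {
		fmt.Println("nodefs.available below 10%: eviction would begin")
	}
}
```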
Jul 1 08:35:01.211703 kubelet[2413]: I0701 08:35:01.211563 2413 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212334 kubelet[2413]: I0701 08:35:01.212251 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-k8s-certs\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212629 kubelet[2413]: I0701 08:35:01.212337 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212629 kubelet[2413]: I0701 08:35:01.212397 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212629 kubelet[2413]: I0701 08:35:01.212453 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212629 kubelet[2413]: I0701 08:35:01.212517 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212978 kubelet[2413]: I0701 08:35:01.212579 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-ca-certs\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212978 kubelet[2413]: I0701 08:35:01.212621 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212978 kubelet[2413]: I0701 08:35:01.212690 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ee2b7941fabf6fbe1f7f2e22150b16a6-kubeconfig\") pod \"kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"ee2b7941fabf6fbe1f7f2e22150b16a6\") " pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.212978 kubelet[2413]: I0701 08:35:01.212744 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-ca-certs\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.214334 kubelet[2413]: E0701 08:35:01.214260 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.49:6443/api/v1/nodes\": dial tcp 172.24.4.49:6443: connect: connection refused" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.218605 kubelet[2413]: E0701 08:35:01.218390 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-s-39d8ad6622.novalocal?timeout=10s\": dial tcp 172.24.4.49:6443: connect: connection refused" interval="400ms" Jul 1 08:35:01.219230 kubelet[2413]: E0701 08:35:01.219138 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.419710 kubelet[2413]: I0701 08:35:01.419371 2413 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.421436 kubelet[2413]: E0701 08:35:01.421365 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.49:6443/api/v1/nodes\": dial tcp 172.24.4.49:6443: connect: connection refused" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.481098 containerd[1551]: time="2025-07-01T08:35:01.480932637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:096de24328c9676aab98b000546c4460,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:01.504729 containerd[1551]: time="2025-07-01T08:35:01.503857793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:19d5fd3b369c71ac2860ad96d108053b,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:01.522413 containerd[1551]: time="2025-07-01T08:35:01.522317334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:ee2b7941fabf6fbe1f7f2e22150b16a6,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:01.620268 kubelet[2413]: E0701 08:35:01.620141 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-s-39d8ad6622.novalocal?timeout=10s\": dial tcp 172.24.4.49:6443: connect: connection refused" interval="800ms" Jul 1 08:35:01.699184 containerd[1551]: time="2025-07-01T08:35:01.699131199Z" level=info msg="connecting to shim 8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564" address="unix:///run/containerd/s/21759f0e7be244e7cf159961ece0d17b73d35ad02ad783a91484f5a9505d1d0c" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:01.703317 containerd[1551]: time="2025-07-01T08:35:01.703282094Z" 
level=info msg="connecting to shim 8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a" address="unix:///run/containerd/s/884f0602dc6862425d8207229a8abcb64003a9dd7adae5ebd7b0c79b2f983cae" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:01.704988 containerd[1551]: time="2025-07-01T08:35:01.704941336Z" level=info msg="connecting to shim 40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b" address="unix:///run/containerd/s/0163f98ad37e7e43c26b5c1ce4d28224590f7d4adc3c307b3c7c94cb4b012da4" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:01.761250 systemd[1]: Started cri-containerd-8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564.scope - libcontainer container 8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564. Jul 1 08:35:01.776388 systemd[1]: Started cri-containerd-8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a.scope - libcontainer container 8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a. Jul 1 08:35:01.796416 systemd[1]: Started cri-containerd-40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b.scope - libcontainer container 40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b. Jul 1 08:35:01.850020 kubelet[2413]: I0701 08:35:01.849981 2413 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.850690 kubelet[2413]: E0701 08:35:01.850656 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.49:6443/api/v1/nodes\": dial tcp 172.24.4.49:6443: connect: connection refused" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:01.893640 containerd[1551]: time="2025-07-01T08:35:01.893221776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:096de24328c9676aab98b000546c4460,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a\"" Jul 1 08:35:01.910988 containerd[1551]: time="2025-07-01T08:35:01.910127342Z" level=info msg="CreateContainer within sandbox \"8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 1 08:35:01.913145 containerd[1551]: time="2025-07-01T08:35:01.913050865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:ee2b7941fabf6fbe1f7f2e22150b16a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\"" Jul 1 08:35:01.922004 containerd[1551]: time="2025-07-01T08:35:01.921940698Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 1 08:35:01.929619 containerd[1551]: time="2025-07-01T08:35:01.929553015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal,Uid:19d5fd3b369c71ac2860ad96d108053b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\"" Jul 1 08:35:01.941552 containerd[1551]: time="2025-07-01T08:35:01.941505211Z" level=info msg="Container fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:01.943663 containerd[1551]: time="2025-07-01T08:35:01.943620368Z" level=info 
msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 1 08:35:01.945137 containerd[1551]: time="2025-07-01T08:35:01.945013361Z" level=info msg="Container e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:01.955201 containerd[1551]: time="2025-07-01T08:35:01.955102503Z" level=info msg="Container 433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:01.973494 containerd[1551]: time="2025-07-01T08:35:01.973447519Z" level=info msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\"" Jul 1 08:35:01.975315 containerd[1551]: time="2025-07-01T08:35:01.975258275Z" level=info msg="StartContainer for \"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\"" Jul 1 08:35:01.981162 containerd[1551]: time="2025-07-01T08:35:01.981121602Z" level=info msg="connecting to shim 433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135" address="unix:///run/containerd/s/21759f0e7be244e7cf159961ece0d17b73d35ad02ad783a91484f5a9505d1d0c" protocol=ttrpc version=3 Jul 1 08:35:01.982574 containerd[1551]: time="2025-07-01T08:35:01.982500708Z" level=info msg="CreateContainer within sandbox \"8a05b52ad964c33b8554fba45416eea9e732d072c5b6dd92f52b3d386f86b60a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155\"" Jul 1 08:35:01.986769 containerd[1551]: time="2025-07-01T08:35:01.986730401Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\"" Jul 1 08:35:01.987616 containerd[1551]: time="2025-07-01T08:35:01.987580485Z" level=info msg="StartContainer for \"fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155\"" Jul 1 08:35:01.987899 containerd[1551]: time="2025-07-01T08:35:01.987880268Z" level=info msg="StartContainer for \"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\"" Jul 1 08:35:01.990269 containerd[1551]: time="2025-07-01T08:35:01.990222861Z" level=info msg="connecting to shim fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155" address="unix:///run/containerd/s/884f0602dc6862425d8207229a8abcb64003a9dd7adae5ebd7b0c79b2f983cae" protocol=ttrpc version=3 Jul 1 08:35:01.991547 containerd[1551]: time="2025-07-01T08:35:01.991519513Z" level=info msg="connecting to shim e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44" address="unix:///run/containerd/s/0163f98ad37e7e43c26b5c1ce4d28224590f7d4adc3c307b3c7c94cb4b012da4" protocol=ttrpc version=3 Jul 1 08:35:02.007419 systemd[1]: Started cri-containerd-433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135.scope - libcontainer container 433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135. Jul 1 08:35:02.033303 systemd[1]: Started cri-containerd-fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155.scope - libcontainer container fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155. 
Jul 1 08:35:02.038262 kubelet[2413]: E0701 08:35:02.038233 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:35:02.043065 systemd[1]: Started cri-containerd-e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44.scope - libcontainer container e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44. Jul 1 08:35:02.065992 kubelet[2413]: E0701 08:35:02.065895 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:35:02.195673 kubelet[2413]: E0701 08:35:02.195587 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:35:02.196253 kubelet[2413]: E0701 08:35:02.196201 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-9-s-39d8ad6622.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:35:02.220579 containerd[1551]: time="2025-07-01T08:35:02.220442298Z" level=info msg="StartContainer for \"fc439bcd34449f41e1d0cbb6d6f776cde284ffe3680b8d7df5b14fd115652155\" returns successfully" Jul 1 08:35:02.249005 containerd[1551]: time="2025-07-01T08:35:02.248954252Z" level=info msg="StartContainer for \"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" returns successfully" Jul 1 08:35:02.274002 containerd[1551]: time="2025-07-01T08:35:02.273912471Z" level=info msg="StartContainer for \"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" returns successfully" Jul 1 08:35:02.654522 kubelet[2413]: I0701 08:35:02.654257 2413 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:03.082118 kubelet[2413]: E0701 08:35:03.081372 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:03.195787 kubelet[2413]: E0701 08:35:03.195740 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:03.201712 kubelet[2413]: E0701 08:35:03.201426 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:04.114263 kubelet[2413]: E0701 08:35:04.114099 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:04.116429 kubelet[2413]: E0701 08:35:04.114565 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:04.116863 kubelet[2413]: E0701 08:35:04.116727 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:04.945144 kubelet[2413]: E0701 08:35:04.945073 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:04.985172 kubelet[2413]: I0701 08:35:04.984962 2413 apiserver.go:52] "Watching apiserver" Jul 1 08:35:05.010275 kubelet[2413]: I0701 08:35:05.010226 2413 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:35:05.120273 kubelet[2413]: E0701 08:35:05.119835 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.125581 kubelet[2413]: E0701 08:35:05.124653 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.126570 kubelet[2413]: E0701 08:35:05.124989 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-s-39d8ad6622.novalocal\" not found" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.606715 kubelet[2413]: I0701 08:35:05.606565 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.636048 kubelet[2413]: I0701 08:35:05.634310 2413 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.656428 kubelet[2413]: E0701 08:35:05.656332 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.656428 kubelet[2413]: I0701 08:35:05.656407 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.663800 kubelet[2413]: E0701 08:35:05.663243 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.663800 kubelet[2413]: I0701 08:35:05.663309 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:05.675702 kubelet[2413]: E0701 08:35:05.675373 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:06.899191 kubelet[2413]: I0701 08:35:06.898961 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:06.928139 kubelet[2413]: I0701 08:35:06.927613 2413 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 1 08:35:08.037342 systemd[1]: Reload requested from client PID 2694 ('systemctl') (unit session-9.scope)... Jul 1 08:35:08.038553 systemd[1]: Reloading... Jul 1 08:35:08.319173 zram_generator::config[2740]: No configuration found. Jul 1 08:35:08.502437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:35:08.680862 systemd[1]: Reloading finished in 641 ms. Jul 1 08:35:08.772448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:35:08.785637 systemd[1]: kubelet.service: Deactivated successfully. Jul 1 08:35:08.786278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:35:08.786690 systemd[1]: kubelet.service: Consumed 2.097s CPU time, 133.2M memory peak. Jul 1 08:35:08.794838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:35:09.141309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:35:09.154512 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:35:09.351125 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:35:09.351962 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:35:09.351962 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
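The docker.socket warning that reappears on this reload is systemd pointing out that ListenStream= still uses the legacy /var/run prefix; at load time it rewrites /var/run/docker.sock to /run/docker.sock and asks for the unit file to be updated. The same normalization, sketched in Go for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Value from the docker.socket unit referenced in the warning.
	legacy := "/var/run/docker.sock"

	updated := legacy
	if strings.HasPrefix(legacy, "/var/run/") {
		// /var/run is a symlink to /run on current systems, so both paths
		// name the same socket; systemd just wants the canonical one.
		updated = "/run/" + strings.TrimPrefix(legacy, "/var/run/")
	}
	fmt.Printf("%s -> %s\n", legacy, updated) // /var/run/docker.sock -> /run/docker.sock
}
```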
Jul 1 08:35:09.351962 kubelet[2803]: I0701 08:35:09.351636 2803 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:35:09.361035 kubelet[2803]: I0701 08:35:09.360989 2803 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:35:09.361035 kubelet[2803]: I0701 08:35:09.361018 2803 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:35:09.363614 kubelet[2803]: I0701 08:35:09.363097 2803 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:35:09.367143 kubelet[2803]: I0701 08:35:09.367112 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 1 08:35:09.371261 kubelet[2803]: I0701 08:35:09.370724 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:35:09.378418 kubelet[2803]: I0701 08:35:09.378379 2803 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:35:09.383115 kubelet[2803]: I0701 08:35:09.383052 2803 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 1 08:35:09.383419 kubelet[2803]: I0701 08:35:09.383310 2803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:35:09.383725 kubelet[2803]: I0701 08:35:09.383385 2803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-9-s-39d8ad6622.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:35:09.384279 kubelet[2803]: I0701 08:35:09.383734 2803 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:35:09.384279 kubelet[2803]: I0701 08:35:09.383754 2803 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:35:09.384279 kubelet[2803]: I0701 08:35:09.383904 2803 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:35:09.384279 kubelet[2803]: I0701 
08:35:09.384241 2803 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:35:09.384279 kubelet[2803]: I0701 08:35:09.384272 2803 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:35:09.384765 kubelet[2803]: I0701 08:35:09.384325 2803 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:35:09.384765 kubelet[2803]: I0701 08:35:09.384353 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:35:09.390086 kubelet[2803]: I0701 08:35:09.388744 2803 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:35:09.390718 kubelet[2803]: I0701 08:35:09.390681 2803 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:35:09.412527 kubelet[2803]: I0701 08:35:09.411188 2803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:35:09.412527 kubelet[2803]: I0701 08:35:09.411272 2803 server.go:1289] "Started kubelet" Jul 1 08:35:09.421118 kubelet[2803]: I0701 08:35:09.418211 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:35:09.426448 kubelet[2803]: I0701 08:35:09.426395 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:35:09.427714 kubelet[2803]: I0701 08:35:09.427632 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:35:09.428731 kubelet[2803]: I0701 08:35:09.428396 2803 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:35:09.432657 kubelet[2803]: I0701 08:35:09.432589 2803 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:35:09.436133 kubelet[2803]: I0701 08:35:09.434623 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:35:09.436373 kubelet[2803]: I0701 08:35:09.436356 2803 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:35:09.437700 kubelet[2803]: I0701 08:35:09.437356 2803 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:35:09.443142 kubelet[2803]: I0701 08:35:09.440286 2803 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:35:09.443449 kubelet[2803]: I0701 08:35:09.443257 2803 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:35:09.444416 kubelet[2803]: I0701 08:35:09.444381 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:35:09.450085 kubelet[2803]: E0701 08:35:09.447953 2803 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:35:09.452634 kubelet[2803]: I0701 08:35:09.452583 2803 factory.go:223] Registration of the containerd container factory successfully Jul 1 08:35:09.453184 sudo[2824]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 1 08:35:09.454684 sudo[2824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 1 08:35:09.468088 kubelet[2803]: I0701 08:35:09.467123 2803 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 1 08:35:09.473632 kubelet[2803]: I0701 08:35:09.473250 2803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 1 08:35:09.473632 kubelet[2803]: I0701 08:35:09.473289 2803 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:35:09.473632 kubelet[2803]: I0701 08:35:09.473316 2803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 1 08:35:09.473632 kubelet[2803]: I0701 08:35:09.473324 2803 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:35:09.473632 kubelet[2803]: E0701 08:35:09.473368 2803 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:35:09.573515 kubelet[2803]: E0701 08:35:09.573472 2803 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 08:35:09.578714 kubelet[2803]: I0701 08:35:09.578686 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:35:09.578714 kubelet[2803]: I0701 08:35:09.578705 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:35:09.578818 kubelet[2803]: I0701 08:35:09.578755 2803 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:35:09.579113 kubelet[2803]: I0701 08:35:09.579091 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 1 08:35:09.579172 kubelet[2803]: I0701 08:35:09.579111 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 1 08:35:09.579172 kubelet[2803]: I0701 08:35:09.579140 2803 policy_none.go:49] "None policy: Start" Jul 1 08:35:09.579172 kubelet[2803]: I0701 08:35:09.579163 2803 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:35:09.579318 kubelet[2803]: I0701 08:35:09.579183 2803 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:35:09.580732 kubelet[2803]: I0701 08:35:09.579458 2803 state_mem.go:75] "Updated machine memory state" Jul 1 08:35:09.589083 kubelet[2803]: E0701 08:35:09.588664 2803 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:35:09.589083 kubelet[2803]: I0701 08:35:09.588934 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:35:09.589083 kubelet[2803]: I0701 08:35:09.588956 2803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:35:09.589846 kubelet[2803]: I0701 08:35:09.589461 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:35:09.597544 kubelet[2803]: E0701 08:35:09.597521 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 1 08:35:09.699315 kubelet[2803]: I0701 08:35:09.699283 2803 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.717696 kubelet[2803]: I0701 08:35:09.717608 2803 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.718014 kubelet[2803]: I0701 08:35:09.717927 2803 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.826270 kubelet[2803]: I0701 08:35:09.826205 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.826526 kubelet[2803]: I0701 08:35:09.826497 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.826734 kubelet[2803]: I0701 08:35:09.826296 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.843345 kubelet[2803]: I0701 08:35:09.843312 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 1 08:35:09.843679 kubelet[2803]: I0701 08:35:09.843371 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 1 08:35:09.847088 kubelet[2803]: I0701 08:35:09.846988 2803 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 1 08:35:09.847229 kubelet[2803]: E0701 08:35:09.847051 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.925870 kubelet[2803]: I0701 08:35:09.925763 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-ca-certs\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926200 kubelet[2803]: I0701 08:35:09.925910 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926200 kubelet[2803]: I0701 08:35:09.926031 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926200 kubelet[2803]: I0701 08:35:09.926094 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926200 kubelet[2803]: I0701 08:35:09.926152 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19d5fd3b369c71ac2860ad96d108053b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"19d5fd3b369c71ac2860ad96d108053b\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926385 kubelet[2803]: I0701 08:35:09.926179 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee2b7941fabf6fbe1f7f2e22150b16a6-kubeconfig\") pod \"kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"ee2b7941fabf6fbe1f7f2e22150b16a6\") " pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926385 kubelet[2803]: I0701 08:35:09.926224 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-ca-certs\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926385 kubelet[2803]: I0701 08:35:09.926253 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-k8s-certs\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:09.926385 kubelet[2803]: I0701 08:35:09.926278 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/096de24328c9676aab98b000546c4460-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal\" (UID: \"096de24328c9676aab98b000546c4460\") " pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" Jul 1 08:35:10.429567 kubelet[2803]: I0701 08:35:10.429018 2803 apiserver.go:52] "Watching apiserver" Jul 1 08:35:10.442086 kubelet[2803]: I0701 08:35:10.441233 2803 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:35:10.456537 sudo[2824]: pam_unix(sudo:session): session closed for user root Jul 1 08:35:10.589729 kubelet[2803]: I0701 08:35:10.589439 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal" podStartSLOduration=4.589384673 podStartE2EDuration="4.589384673s" podCreationTimestamp="2025-07-01 08:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:10.587231821 +0000 UTC m=+1.419887823" watchObservedRunningTime="2025-07-01 08:35:10.589384673 +0000 UTC m=+1.422040635" Jul 1 08:35:10.642247 
kubelet[2803]: I0701 08:35:10.642170 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999-9-9-s-39d8ad6622.novalocal" podStartSLOduration=1.6421509429999999 podStartE2EDuration="1.642150943s" podCreationTimestamp="2025-07-01 08:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:10.613460363 +0000 UTC m=+1.446116345" watchObservedRunningTime="2025-07-01 08:35:10.642150943 +0000 UTC m=+1.474806905" Jul 1 08:35:10.666197 kubelet[2803]: I0701 08:35:10.666112 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-9-9-s-39d8ad6622.novalocal" podStartSLOduration=1.666068519 podStartE2EDuration="1.666068519s" podCreationTimestamp="2025-07-01 08:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:10.642617784 +0000 UTC m=+1.475273776" watchObservedRunningTime="2025-07-01 08:35:10.666068519 +0000 UTC m=+1.498724501" Jul 1 08:35:12.865379 sudo[1812]: pam_unix(sudo:session): session closed for user root Jul 1 08:35:13.012647 sshd[1811]: Connection closed by 172.24.4.1 port 56602 Jul 1 08:35:13.020893 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Jul 1 08:35:13.040872 systemd[1]: sshd@6-172.24.4.49:22-172.24.4.1:56602.service: Deactivated successfully. Jul 1 08:35:13.054674 systemd[1]: session-9.scope: Deactivated successfully. Jul 1 08:35:13.056024 systemd[1]: session-9.scope: Consumed 10.867s CPU time, 275.8M memory peak. Jul 1 08:35:13.070313 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Jul 1 08:35:13.078185 systemd-logind[1530]: Removed session 9. Jul 1 08:35:13.299565 kubelet[2803]: I0701 08:35:13.299507 2803 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 1 08:35:13.301635 containerd[1551]: time="2025-07-01T08:35:13.301480364Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 1 08:35:13.302769 kubelet[2803]: I0701 08:35:13.302643 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 1 08:35:14.178165 systemd[1]: Created slice kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice - libcontainer container kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice. Jul 1 08:35:14.192113 systemd[1]: Created slice kubepods-besteffort-poda66edb51_b61c_4ff3_80ee_bf651e73c0df.slice - libcontainer container kubepods-besteffort-poda66edb51_b61c_4ff3_80ee_bf651e73c0df.slice. 
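The pod_startup_latency_tracker entries above report podStartSLOduration values that match the difference between each pod's podCreationTimestamp and the watchObservedRunningTime; for kube-apiserver that is 08:35:06 to 08:35:10.589384673, i.e. the 4.589384673s in the log. The subtraction, reproduced directly from those two timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"

	// Timestamps copied from the kube-apiserver tracker entry in the log.
	created, err := time.Parse(layout, "2025-07-01 08:35:06 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-07-01 08:35:10.589384673 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(observed.Sub(created)) // prints 4.589384673s, matching podStartSLOduration
}
```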
Jul 1 08:35:14.270323 kubelet[2803]: I0701 08:35:14.270266 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a66edb51-b61c-4ff3-80ee-bf651e73c0df-kube-proxy\") pod \"kube-proxy-wx799\" (UID: \"a66edb51-b61c-4ff3-80ee-bf651e73c0df\") " pod="kube-system/kube-proxy-wx799" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270326 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-run\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270362 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-cgroup\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270401 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cni-path\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270427 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-config-path\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270456 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66edb51-b61c-4ff3-80ee-bf651e73c0df-xtables-lock\") pod \"kube-proxy-wx799\" (UID: \"a66edb51-b61c-4ff3-80ee-bf651e73c0df\") " pod="kube-system/kube-proxy-wx799" Jul 1 08:35:14.270498 kubelet[2803]: I0701 08:35:14.270480 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-etc-cni-netd\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270503 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-lib-modules\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270542 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-net\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270562 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-kernel\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270580 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hubble-tls\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270620 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66edb51-b61c-4ff3-80ee-bf651e73c0df-lib-modules\") pod \"kube-proxy-wx799\" (UID: \"a66edb51-b61c-4ff3-80ee-bf651e73c0df\") " pod="kube-system/kube-proxy-wx799" Jul 1 08:35:14.270714 kubelet[2803]: I0701 08:35:14.270676 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-bpf-maps\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270944 kubelet[2803]: I0701 08:35:14.270700 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hostproc\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270944 kubelet[2803]: I0701 08:35:14.270741 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-xtables-lock\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270944 kubelet[2803]: I0701 08:35:14.270769 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44z4l\" (UniqueName: \"kubernetes.io/projected/a66edb51-b61c-4ff3-80ee-bf651e73c0df-kube-api-access-44z4l\") pod \"kube-proxy-wx799\" (UID: \"a66edb51-b61c-4ff3-80ee-bf651e73c0df\") " pod="kube-system/kube-proxy-wx799" Jul 1 08:35:14.270944 kubelet[2803]: I0701 08:35:14.270810 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-clustermesh-secrets\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.270944 kubelet[2803]: I0701 08:35:14.270842 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wbmj\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-kube-api-access-8wbmj\") pod \"cilium-9cdht\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " pod="kube-system/cilium-9cdht" Jul 1 08:35:14.487594 containerd[1551]: time="2025-07-01T08:35:14.487519861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cdht,Uid:26f99ec5-703e-4073-b4cc-a22c44f1ac1a,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:14.505639 containerd[1551]: time="2025-07-01T08:35:14.505559730Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-wx799,Uid:a66edb51-b61c-4ff3-80ee-bf651e73c0df,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:14.581819 systemd[1]: Created slice kubepods-besteffort-pod489c1c0b_9a01_4d83_a65f_9e542bbf37ba.slice - libcontainer container kubepods-besteffort-pod489c1c0b_9a01_4d83_a65f_9e542bbf37ba.slice. Jul 1 08:35:14.590183 containerd[1551]: time="2025-07-01T08:35:14.590109202Z" level=info msg="connecting to shim 5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:14.599683 containerd[1551]: time="2025-07-01T08:35:14.599605716Z" level=info msg="connecting to shim 2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3" address="unix:///run/containerd/s/99044dffd1a904799abee25bee27174568f0ed1fd2e17bf75e2c498a783ab3a2" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:14.718790 kubelet[2803]: I0701 08:35:14.718701 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-htg9w\" (UID: \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\") " pod="kube-system/cilium-operator-6c4d7847fc-htg9w" Jul 1 08:35:14.718790 kubelet[2803]: I0701 08:35:14.718766 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ll7\" (UniqueName: \"kubernetes.io/projected/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-kube-api-access-55ll7\") pod \"cilium-operator-6c4d7847fc-htg9w\" (UID: \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\") " pod="kube-system/cilium-operator-6c4d7847fc-htg9w" Jul 1 08:35:14.721403 systemd[1]: Started cri-containerd-5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1.scope - libcontainer container 5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1. Jul 1 08:35:14.739244 systemd[1]: Started cri-containerd-2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3.scope - libcontainer container 2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3. 
Jul 1 08:35:14.826872 containerd[1551]: time="2025-07-01T08:35:14.826818792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cdht,Uid:26f99ec5-703e-4073-b4cc-a22c44f1ac1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\"" Jul 1 08:35:14.838641 containerd[1551]: time="2025-07-01T08:35:14.838385730Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 1 08:35:14.856833 containerd[1551]: time="2025-07-01T08:35:14.856783437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wx799,Uid:a66edb51-b61c-4ff3-80ee-bf651e73c0df,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3\"" Jul 1 08:35:14.872183 containerd[1551]: time="2025-07-01T08:35:14.872139087Z" level=info msg="CreateContainer within sandbox \"2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 1 08:35:14.888861 containerd[1551]: time="2025-07-01T08:35:14.888815111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-htg9w,Uid:489c1c0b-9a01-4d83-a65f-9e542bbf37ba,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:14.912666 containerd[1551]: time="2025-07-01T08:35:14.912614610Z" level=info msg="Container 172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:14.961616 containerd[1551]: time="2025-07-01T08:35:14.960348379Z" level=info msg="CreateContainer within sandbox \"2d97829014ec81ecff96fdacd2be5e927a861748ea0992623e5a4d85554dd9c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023\"" Jul 1 08:35:14.962590 containerd[1551]: time="2025-07-01T08:35:14.962541563Z" level=info msg="StartContainer for \"172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023\"" Jul 1 08:35:14.968508 containerd[1551]: time="2025-07-01T08:35:14.968404055Z" level=info msg="connecting to shim 172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023" address="unix:///run/containerd/s/99044dffd1a904799abee25bee27174568f0ed1fd2e17bf75e2c498a783ab3a2" protocol=ttrpc version=3 Jul 1 08:35:14.973122 containerd[1551]: time="2025-07-01T08:35:14.972703949Z" level=info msg="connecting to shim bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110" address="unix:///run/containerd/s/5da55d9a0ef3e4dcc7c9c05561194aab0a94dc97c1341c5d5e1e6892cb072eef" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:14.996299 systemd[1]: Started cri-containerd-172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023.scope - libcontainer container 172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023. Jul 1 08:35:15.021202 systemd[1]: Started cri-containerd-bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110.scope - libcontainer container bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110. 
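Each "connecting to shim" entry above carries the shim's Unix-domain socket in a unix:// address; containerd then speaks ttrpc to the shim over that socket. A rough sketch, assuming the socket path from the log exists on the node, that only verifies the socket accepts connections (it does not implement ttrpc):

import socket
from urllib.parse import urlparse

# Address copied from the containerd entry above; the path is host-specific.
shim_address = "unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01"
path = urlparse(shim_address).path

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.settimeout(1.0)
    s.connect(path)  # raises if the shim socket is gone
    print(f"shim socket reachable at {path}")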
Jul 1 08:35:15.088859 containerd[1551]: time="2025-07-01T08:35:15.088744645Z" level=info msg="StartContainer for \"172a3736c7bde5648c9ea3f94b7878cc30bb973883e5b56644aa0e97f545b023\" returns successfully" Jul 1 08:35:15.101996 containerd[1551]: time="2025-07-01T08:35:15.101937887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-htg9w,Uid:489c1c0b-9a01-4d83-a65f-9e542bbf37ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\"" Jul 1 08:35:15.561652 kubelet[2803]: I0701 08:35:15.561481 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wx799" podStartSLOduration=1.561447539 podStartE2EDuration="1.561447539s" podCreationTimestamp="2025-07-01 08:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:15.56117522 +0000 UTC m=+6.393831213" watchObservedRunningTime="2025-07-01 08:35:15.561447539 +0000 UTC m=+6.394103511" Jul 1 08:35:22.429602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427502163.mount: Deactivated successfully. Jul 1 08:35:26.302094 containerd[1551]: time="2025-07-01T08:35:26.300838877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:35:26.304580 containerd[1551]: time="2025-07-01T08:35:26.304532268Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 1 08:35:26.306935 containerd[1551]: time="2025-07-01T08:35:26.306827644Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:35:26.312809 containerd[1551]: time="2025-07-01T08:35:26.312536968Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.473391158s" Jul 1 08:35:26.312809 containerd[1551]: time="2025-07-01T08:35:26.312621766Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 1 08:35:26.316049 containerd[1551]: time="2025-07-01T08:35:26.315983587Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 1 08:35:26.332804 containerd[1551]: time="2025-07-01T08:35:26.331797069Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 1 08:35:26.358106 containerd[1551]: time="2025-07-01T08:35:26.357325660Z" level=info msg="Container d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:26.381824 containerd[1551]: time="2025-07-01T08:35:26.381738834Z" level=info 
msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\"" Jul 1 08:35:26.383085 containerd[1551]: time="2025-07-01T08:35:26.383020854Z" level=info msg="StartContainer for \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\"" Jul 1 08:35:26.386093 containerd[1551]: time="2025-07-01T08:35:26.386030235Z" level=info msg="connecting to shim d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" protocol=ttrpc version=3 Jul 1 08:35:26.478575 systemd[1]: Started cri-containerd-d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f.scope - libcontainer container d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f. Jul 1 08:35:26.656520 containerd[1551]: time="2025-07-01T08:35:26.656271286Z" level=info msg="StartContainer for \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" returns successfully" Jul 1 08:35:26.673590 systemd[1]: cri-containerd-d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f.scope: Deactivated successfully. Jul 1 08:35:26.679788 containerd[1551]: time="2025-07-01T08:35:26.679702713Z" level=info msg="received exit event container_id:\"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" id:\"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" pid:3227 exited_at:{seconds:1751358926 nanos:677956075}" Jul 1 08:35:26.680047 containerd[1551]: time="2025-07-01T08:35:26.679740844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" id:\"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" pid:3227 exited_at:{seconds:1751358926 nanos:677956075}" Jul 1 08:35:26.704041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f-rootfs.mount: Deactivated successfully. Jul 1 08:35:28.666267 containerd[1551]: time="2025-07-01T08:35:28.665743214Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 1 08:35:28.743280 containerd[1551]: time="2025-07-01T08:35:28.743117053Z" level=info msg="Container 0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:28.748750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853866822.mount: Deactivated successfully. 
Jul 1 08:35:28.762094 containerd[1551]: time="2025-07-01T08:35:28.761854856Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\"" Jul 1 08:35:28.765303 containerd[1551]: time="2025-07-01T08:35:28.765255071Z" level=info msg="StartContainer for \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\"" Jul 1 08:35:28.768959 containerd[1551]: time="2025-07-01T08:35:28.768875467Z" level=info msg="connecting to shim 0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" protocol=ttrpc version=3 Jul 1 08:35:28.824386 systemd[1]: Started cri-containerd-0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372.scope - libcontainer container 0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372. Jul 1 08:35:28.962785 containerd[1551]: time="2025-07-01T08:35:28.962678178Z" level=info msg="StartContainer for \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" returns successfully" Jul 1 08:35:28.981473 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 1 08:35:28.982135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:35:29.028051 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:35:29.031548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:35:29.038161 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 1 08:35:29.041040 systemd[1]: cri-containerd-0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372.scope: Deactivated successfully. Jul 1 08:35:29.055569 containerd[1551]: time="2025-07-01T08:35:29.053816223Z" level=info msg="received exit event container_id:\"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" id:\"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" pid:3272 exited_at:{seconds:1751358929 nanos:47675938}" Jul 1 08:35:29.057390 containerd[1551]: time="2025-07-01T08:35:29.057358615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" id:\"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" pid:3272 exited_at:{seconds:1751358929 nanos:47675938}" Jul 1 08:35:29.082860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:35:29.670444 containerd[1551]: time="2025-07-01T08:35:29.670226476Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 1 08:35:29.701132 containerd[1551]: time="2025-07-01T08:35:29.700670939Z" level=info msg="Container 3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:29.717027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372-rootfs.mount: Deactivated successfully. 
Jul 1 08:35:29.749274 containerd[1551]: time="2025-07-01T08:35:29.749231125Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\"" Jul 1 08:35:29.750820 containerd[1551]: time="2025-07-01T08:35:29.750789011Z" level=info msg="StartContainer for \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\"" Jul 1 08:35:29.757192 containerd[1551]: time="2025-07-01T08:35:29.756688786Z" level=info msg="connecting to shim 3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" protocol=ttrpc version=3 Jul 1 08:35:29.840683 systemd[1]: Started cri-containerd-3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4.scope - libcontainer container 3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4. Jul 1 08:35:29.942612 systemd[1]: cri-containerd-3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4.scope: Deactivated successfully. Jul 1 08:35:29.945867 containerd[1551]: time="2025-07-01T08:35:29.945814767Z" level=info msg="StartContainer for \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" returns successfully" Jul 1 08:35:29.950444 containerd[1551]: time="2025-07-01T08:35:29.950328938Z" level=info msg="received exit event container_id:\"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" id:\"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" pid:3332 exited_at:{seconds:1751358929 nanos:949745936}" Jul 1 08:35:29.951026 containerd[1551]: time="2025-07-01T08:35:29.950988463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" id:\"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" pid:3332 exited_at:{seconds:1751358929 nanos:949745936}" Jul 1 08:35:29.997517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4-rootfs.mount: Deactivated successfully. 
Jul 1 08:35:30.569863 containerd[1551]: time="2025-07-01T08:35:30.569804111Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:35:30.571404 containerd[1551]: time="2025-07-01T08:35:30.571379831Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 1 08:35:30.572888 containerd[1551]: time="2025-07-01T08:35:30.572862026Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:35:30.574143 containerd[1551]: time="2025-07-01T08:35:30.574116936Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.257835351s" Jul 1 08:35:30.574313 containerd[1551]: time="2025-07-01T08:35:30.574153324Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 1 08:35:30.585912 containerd[1551]: time="2025-07-01T08:35:30.585855714Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 1 08:35:30.601734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839730790.mount: Deactivated successfully. Jul 1 08:35:30.602412 containerd[1551]: time="2025-07-01T08:35:30.602373128Z" level=info msg="Container 68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:30.617701 containerd[1551]: time="2025-07-01T08:35:30.617650802Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\"" Jul 1 08:35:30.618756 containerd[1551]: time="2025-07-01T08:35:30.618726727Z" level=info msg="StartContainer for \"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\"" Jul 1 08:35:30.620617 containerd[1551]: time="2025-07-01T08:35:30.620578043Z" level=info msg="connecting to shim 68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a" address="unix:///run/containerd/s/5da55d9a0ef3e4dcc7c9c05561194aab0a94dc97c1341c5d5e1e6892cb072eef" protocol=ttrpc version=3 Jul 1 08:35:30.657451 systemd[1]: Started cri-containerd-68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a.scope - libcontainer container 68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a. 
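The two image pulls above report both bytes read and elapsed time (166730503 bytes in 11.473391158s for cilium, 18904197 bytes in 4.257835351s for operator-generic), which allows a rough throughput check; bytes read include registry metadata, so the figures are only indicative:

# Figures copied from the "stop pulling image" / "Pulled image ... in ..." entries above.
pulls = {
    "cilium:v1.12.5":           (166_730_503, 11.473391158),
    "operator-generic:v1.12.5": (18_904_197,   4.257835351),
}

for name, (bytes_read, seconds) in pulls.items():
    print(f"{name}: {bytes_read / seconds / 2**20:.1f} MiB/s")  # ~13.9 and ~4.2 MiB/s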
Jul 1 08:35:30.692331 containerd[1551]: time="2025-07-01T08:35:30.691723585Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 1 08:35:30.726540 containerd[1551]: time="2025-07-01T08:35:30.726206866Z" level=info msg="Container 950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:30.732691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200014492.mount: Deactivated successfully. Jul 1 08:35:30.737265 containerd[1551]: time="2025-07-01T08:35:30.737219595Z" level=info msg="StartContainer for \"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" returns successfully" Jul 1 08:35:30.753830 containerd[1551]: time="2025-07-01T08:35:30.753784608Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\"" Jul 1 08:35:30.754716 containerd[1551]: time="2025-07-01T08:35:30.754512922Z" level=info msg="StartContainer for \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\"" Jul 1 08:35:30.757997 containerd[1551]: time="2025-07-01T08:35:30.757944327Z" level=info msg="connecting to shim 950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" protocol=ttrpc version=3 Jul 1 08:35:30.798510 systemd[1]: Started cri-containerd-950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f.scope - libcontainer container 950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f. Jul 1 08:35:30.841426 systemd[1]: cri-containerd-950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f.scope: Deactivated successfully. Jul 1 08:35:30.848039 containerd[1551]: time="2025-07-01T08:35:30.847996007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" id:\"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" pid:3411 exited_at:{seconds:1751358930 nanos:847350849}" Jul 1 08:35:30.848477 containerd[1551]: time="2025-07-01T08:35:30.848255693Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice/cri-containerd-950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f.scope/memory.events\": no such file or directory" Jul 1 08:35:30.854335 containerd[1551]: time="2025-07-01T08:35:30.853403090Z" level=info msg="received exit event container_id:\"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" id:\"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" pid:3411 exited_at:{seconds:1751358930 nanos:847350849}" Jul 1 08:35:30.871819 containerd[1551]: time="2025-07-01T08:35:30.871768765Z" level=info msg="StartContainer for \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" returns successfully" Jul 1 08:35:30.898811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f-rootfs.mount: Deactivated successfully. 
Jul 1 08:35:31.712919 containerd[1551]: time="2025-07-01T08:35:31.712824438Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 1 08:35:31.750400 kubelet[2803]: I0701 08:35:31.750302 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-htg9w" podStartSLOduration=2.278876534 podStartE2EDuration="17.750189072s" podCreationTimestamp="2025-07-01 08:35:14 +0000 UTC" firstStartedPulling="2025-07-01 08:35:15.104139006 +0000 UTC m=+5.936794968" lastFinishedPulling="2025-07-01 08:35:30.575451534 +0000 UTC m=+21.408107506" observedRunningTime="2025-07-01 08:35:31.749382672 +0000 UTC m=+22.582038654" watchObservedRunningTime="2025-07-01 08:35:31.750189072 +0000 UTC m=+22.582845034" Jul 1 08:35:31.759221 containerd[1551]: time="2025-07-01T08:35:31.759163146Z" level=info msg="Container 224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:31.760290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068474622.mount: Deactivated successfully. Jul 1 08:35:31.786810 containerd[1551]: time="2025-07-01T08:35:31.786749800Z" level=info msg="CreateContainer within sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\"" Jul 1 08:35:31.788176 containerd[1551]: time="2025-07-01T08:35:31.788141446Z" level=info msg="StartContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\"" Jul 1 08:35:31.790571 containerd[1551]: time="2025-07-01T08:35:31.790522524Z" level=info msg="connecting to shim 224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f" address="unix:///run/containerd/s/8b3eb0f21be6251f07ab8e5c37a85d5dd310ccfe93d246dba4cec7d48b916a01" protocol=ttrpc version=3 Jul 1 08:35:31.838348 systemd[1]: Started cri-containerd-224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f.scope - libcontainer container 224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f. Jul 1 08:35:31.960882 containerd[1551]: time="2025-07-01T08:35:31.960841159Z" level=info msg="StartContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" returns successfully" Jul 1 08:35:32.344184 containerd[1551]: time="2025-07-01T08:35:32.344118734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" id:\"c47554a59e7ef1b99a3d6aa6a27ceefc329e72fae13e90759f21a1899d626ad2\" pid:3481 exited_at:{seconds:1751358932 nanos:342467542}" Jul 1 08:35:32.356800 kubelet[2803]: I0701 08:35:32.356759 2803 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 1 08:35:32.425423 systemd[1]: Created slice kubepods-burstable-pod1cb3f393_d9f2_46aa_bcee_9084eaa89a06.slice - libcontainer container kubepods-burstable-pod1cb3f393_d9f2_46aa_bcee_9084eaa89a06.slice. Jul 1 08:35:32.435814 systemd[1]: Created slice kubepods-burstable-pod79547f0c_31b5_4b7b_b62a_0df562a1e3a5.slice - libcontainer container kubepods-burstable-pod79547f0c_31b5_4b7b_b62a_0df562a1e3a5.slice. 
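Between 08:35:26 and 08:35:31 the cilium-9cdht sandbox runs its init containers in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-lived cilium-agent container starts. A small sketch, assuming the journal text is saved to a hypothetical node.log file, that recovers that order from the CreateContainer entries for this sandbox:

import re

LOG_PATH = "node.log"  # hypothetical capture of the journal above
CILIUM_SANDBOX = "5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1"  # from RunPodSandbox above

pattern = re.compile(r"CreateContainer within sandbox .+? for container "
                     r"&ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\}")

order = []
with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        if CILIUM_SANDBOX not in line:
            continue
        for name in pattern.findall(line):
            if name not in order:
                order.append(name)

print(order)  # expected: ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent']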
Jul 1 08:35:32.540470 kubelet[2803]: I0701 08:35:32.540397 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb3f393-d9f2-46aa-bcee-9084eaa89a06-config-volume\") pod \"coredns-674b8bbfcf-h9nvb\" (UID: \"1cb3f393-d9f2-46aa-bcee-9084eaa89a06\") " pod="kube-system/coredns-674b8bbfcf-h9nvb" Jul 1 08:35:32.540470 kubelet[2803]: I0701 08:35:32.540464 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d54b\" (UniqueName: \"kubernetes.io/projected/1cb3f393-d9f2-46aa-bcee-9084eaa89a06-kube-api-access-2d54b\") pod \"coredns-674b8bbfcf-h9nvb\" (UID: \"1cb3f393-d9f2-46aa-bcee-9084eaa89a06\") " pod="kube-system/coredns-674b8bbfcf-h9nvb" Jul 1 08:35:32.540681 kubelet[2803]: I0701 08:35:32.540506 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79547f0c-31b5-4b7b-b62a-0df562a1e3a5-config-volume\") pod \"coredns-674b8bbfcf-sb9xv\" (UID: \"79547f0c-31b5-4b7b-b62a-0df562a1e3a5\") " pod="kube-system/coredns-674b8bbfcf-sb9xv" Jul 1 08:35:32.540681 kubelet[2803]: I0701 08:35:32.540534 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt9b\" (UniqueName: \"kubernetes.io/projected/79547f0c-31b5-4b7b-b62a-0df562a1e3a5-kube-api-access-kwt9b\") pod \"coredns-674b8bbfcf-sb9xv\" (UID: \"79547f0c-31b5-4b7b-b62a-0df562a1e3a5\") " pod="kube-system/coredns-674b8bbfcf-sb9xv" Jul 1 08:35:32.730457 containerd[1551]: time="2025-07-01T08:35:32.730362963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9nvb,Uid:1cb3f393-d9f2-46aa-bcee-9084eaa89a06,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:32.761378 containerd[1551]: time="2025-07-01T08:35:32.760356971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sb9xv,Uid:79547f0c-31b5-4b7b-b62a-0df562a1e3a5,Namespace:kube-system,Attempt:0,}" Jul 1 08:35:32.781304 kubelet[2803]: I0701 08:35:32.779021 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9cdht" podStartSLOduration=7.300988016 podStartE2EDuration="18.778997688s" podCreationTimestamp="2025-07-01 08:35:14 +0000 UTC" firstStartedPulling="2025-07-01 08:35:14.837451537 +0000 UTC m=+5.670107499" lastFinishedPulling="2025-07-01 08:35:26.315461129 +0000 UTC m=+17.148117171" observedRunningTime="2025-07-01 08:35:32.749748846 +0000 UTC m=+23.582404818" watchObservedRunningTime="2025-07-01 08:35:32.778997688 +0000 UTC m=+23.611653651" Jul 1 08:35:34.795920 systemd-networkd[1438]: cilium_host: Link UP Jul 1 08:35:34.797746 systemd-networkd[1438]: cilium_net: Link UP Jul 1 08:35:34.798334 systemd-networkd[1438]: cilium_net: Gained carrier Jul 1 08:35:34.798835 systemd-networkd[1438]: cilium_host: Gained carrier Jul 1 08:35:34.944775 systemd-networkd[1438]: cilium_vxlan: Link UP Jul 1 08:35:34.944785 systemd-networkd[1438]: cilium_vxlan: Gained carrier Jul 1 08:35:35.097419 systemd-networkd[1438]: cilium_host: Gained IPv6LL Jul 1 08:35:35.450400 kernel: NET: Registered PF_ALG protocol family Jul 1 08:35:35.729335 systemd-networkd[1438]: cilium_net: Gained IPv6LL Jul 1 08:35:36.723226 systemd-networkd[1438]: lxc_health: Link UP Jul 1 08:35:36.734047 systemd-networkd[1438]: lxc_health: Gained carrier Jul 1 08:35:36.817277 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Jul 1 
08:35:37.302271 systemd-networkd[1438]: lxc929d23e16f93: Link UP Jul 1 08:35:37.308116 kernel: eth0: renamed from tmp5a5c5 Jul 1 08:35:37.317198 systemd-networkd[1438]: lxc929d23e16f93: Gained carrier Jul 1 08:35:37.377386 systemd-networkd[1438]: lxcdbd28efac18e: Link UP Jul 1 08:35:37.384112 kernel: eth0: renamed from tmp6e676 Jul 1 08:35:37.387684 systemd-networkd[1438]: lxcdbd28efac18e: Gained carrier Jul 1 08:35:38.545889 systemd-networkd[1438]: lxc929d23e16f93: Gained IPv6LL Jul 1 08:35:38.801520 systemd-networkd[1438]: lxc_health: Gained IPv6LL Jul 1 08:35:38.993501 systemd-networkd[1438]: lxcdbd28efac18e: Gained IPv6LL Jul 1 08:35:42.742177 containerd[1551]: time="2025-07-01T08:35:42.741683852Z" level=info msg="connecting to shim 6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be" address="unix:///run/containerd/s/252cc004b058afc5482eebf3ab57b83994fbe945a2c93a2c0d8fd888749a7337" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:42.828400 systemd[1]: Started cri-containerd-6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be.scope - libcontainer container 6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be. Jul 1 08:35:42.855114 containerd[1551]: time="2025-07-01T08:35:42.851753994Z" level=info msg="connecting to shim 5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5" address="unix:///run/containerd/s/49859cf4f2577341f296b976cc75908cc4970ecd5cddd04733284bbd574dbf20" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:35:42.907470 systemd[1]: Started cri-containerd-5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5.scope - libcontainer container 5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5. Jul 1 08:35:42.963970 containerd[1551]: time="2025-07-01T08:35:42.963913932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sb9xv,Uid:79547f0c-31b5-4b7b-b62a-0df562a1e3a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be\"" Jul 1 08:35:42.986193 containerd[1551]: time="2025-07-01T08:35:42.984928423Z" level=info msg="CreateContainer within sandbox \"6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:35:42.987123 containerd[1551]: time="2025-07-01T08:35:42.987031705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9nvb,Uid:1cb3f393-d9f2-46aa-bcee-9084eaa89a06,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5\"" Jul 1 08:35:43.000566 containerd[1551]: time="2025-07-01T08:35:42.999042924Z" level=info msg="CreateContainer within sandbox \"5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:35:43.018207 containerd[1551]: time="2025-07-01T08:35:43.018127691Z" level=info msg="Container a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:43.041978 containerd[1551]: time="2025-07-01T08:35:43.041922695Z" level=info msg="CreateContainer within sandbox \"6e67640fccca11a562e771d6ab135f98c248c56961bb54065a32ae11137388be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f\"" Jul 1 08:35:43.042470 containerd[1551]: time="2025-07-01T08:35:43.042430326Z" level=info msg="Container 
ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:35:43.043344 containerd[1551]: time="2025-07-01T08:35:43.043318611Z" level=info msg="StartContainer for \"a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f\"" Jul 1 08:35:43.045561 containerd[1551]: time="2025-07-01T08:35:43.045520677Z" level=info msg="connecting to shim a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f" address="unix:///run/containerd/s/252cc004b058afc5482eebf3ab57b83994fbe945a2c93a2c0d8fd888749a7337" protocol=ttrpc version=3 Jul 1 08:35:43.068450 containerd[1551]: time="2025-07-01T08:35:43.068384697Z" level=info msg="CreateContainer within sandbox \"5a5c516d6d39caf73c2468c4e665d518ee7b6b819bcfac433e49bb06633578e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff\"" Jul 1 08:35:43.069610 containerd[1551]: time="2025-07-01T08:35:43.069586239Z" level=info msg="StartContainer for \"ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff\"" Jul 1 08:35:43.070365 systemd[1]: Started cri-containerd-a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f.scope - libcontainer container a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f. Jul 1 08:35:43.074955 containerd[1551]: time="2025-07-01T08:35:43.074922530Z" level=info msg="connecting to shim ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff" address="unix:///run/containerd/s/49859cf4f2577341f296b976cc75908cc4970ecd5cddd04733284bbd574dbf20" protocol=ttrpc version=3 Jul 1 08:35:43.108424 systemd[1]: Started cri-containerd-ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff.scope - libcontainer container ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff. Jul 1 08:35:43.227088 containerd[1551]: time="2025-07-01T08:35:43.222026607Z" level=info msg="StartContainer for \"a7d4f5eb58b1b3318fc85de3999f7e2e008ffef14241270b840964c469699a3f\" returns successfully" Jul 1 08:35:43.263664 containerd[1551]: time="2025-07-01T08:35:43.263550617Z" level=info msg="StartContainer for \"ee8d332214df6da8bdce493488be31cbc284a124fed54732f00b2035e9dd20ff\" returns successfully" Jul 1 08:35:43.682524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288419687.mount: Deactivated successfully. 
Jul 1 08:35:43.885104 kubelet[2803]: I0701 08:35:43.883994 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h9nvb" podStartSLOduration=29.883857041 podStartE2EDuration="29.883857041s" podCreationTimestamp="2025-07-01 08:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:43.834657075 +0000 UTC m=+34.667313138" watchObservedRunningTime="2025-07-01 08:35:43.883857041 +0000 UTC m=+34.716513033" Jul 1 08:35:43.917991 kubelet[2803]: I0701 08:35:43.916871 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sb9xv" podStartSLOduration=29.916851425 podStartE2EDuration="29.916851425s" podCreationTimestamp="2025-07-01 08:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:35:43.916325219 +0000 UTC m=+34.748981221" watchObservedRunningTime="2025-07-01 08:35:43.916851425 +0000 UTC m=+34.749507397" Jul 1 08:36:36.471516 systemd[1]: Started sshd@7-172.24.4.49:22-172.24.4.1:39398.service - OpenSSH per-connection server daemon (172.24.4.1:39398). Jul 1 08:36:54.453652 systemd[1]: cri-containerd-e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44.scope: Deactivated successfully. Jul 1 08:36:54.454845 systemd[1]: cri-containerd-e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44.scope: Consumed 2.871s CPU time, 20.8M memory peak. Jul 1 08:36:54.467459 systemd[1]: cri-containerd-433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135.scope: Deactivated successfully. Jul 1 08:36:54.467907 systemd[1]: cri-containerd-433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135.scope: Consumed 4.286s CPU time, 50.3M memory peak. Jul 1 08:36:54.507429 systemd[1]: cri-containerd-68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a.scope: Deactivated successfully. 
Jul 1 08:36:54.562837 kubelet[2803]: E0701 08:36:54.524727 2803 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-s-39d8ad6622.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 1 08:36:54.562837 kubelet[2803]: E0701 08:36:54.525857 2803 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal.184e13bbad9fe7b9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal,UID:096de24328c9676aab98b000546c4460,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-9999-9-9-s-39d8ad6622.novalocal,},FirstTimestamp:2025-07-01 08:36:41.886451641 +0000 UTC m=+92.719107683,LastTimestamp:2025-07-01 08:36:41.886451641 +0000 UTC m=+92.719107683,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-9-s-39d8ad6622.novalocal,}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.534279540Z" level=info msg="received exit event container_id:\"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" id:\"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" pid:3379 exit_status:1 exited_at:{seconds:1751359014 nanos:529150218}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.535685086Z" level=info msg="received exit event container_id:\"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" id:\"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" pid:2643 exit_status:1 exited_at:{seconds:1751359014 nanos:523001784}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.537261864Z" level=info msg="received exit event container_id:\"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" id:\"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" pid:2618 exit_status:1 exited_at:{seconds:1751359014 nanos:531317854}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.544274970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" id:\"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" pid:2618 exit_status:1 exited_at:{seconds:1751359014 nanos:531317854}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.544401878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" id:\"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" pid:3379 exit_status:1 exited_at:{seconds:1751359014 nanos:529150218}" Jul 1 08:36:54.564780 containerd[1551]: time="2025-07-01T08:36:54.551432016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" id:\"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" pid:2643 exit_status:1 exited_at:{seconds:1751359014 nanos:523001784}" Jul 1 08:36:54.648362 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a-rootfs.mount: Deactivated successfully. Jul 1 08:36:54.675720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44-rootfs.mount: Deactivated successfully. Jul 1 08:36:54.693612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135-rootfs.mount: Deactivated successfully. Jul 1 08:36:54.695213 kubelet[2803]: E0701 08:36:54.695103 2803 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-9999-9-9-s-39d8ad6622.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 1 08:36:55.227039 sshd[4126]: Accepted publickey for core from 172.24.4.1 port 39398 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:36:55.233585 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:36:55.254050 systemd-logind[1530]: New session 10 of user core. Jul 1 08:36:55.260812 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 1 08:36:55.508942 kubelet[2803]: I0701 08:36:55.508719 2803 scope.go:117] "RemoveContainer" containerID="e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44" Jul 1 08:36:55.518728 kubelet[2803]: I0701 08:36:55.518629 2803 scope.go:117] "RemoveContainer" containerID="433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135" Jul 1 08:36:55.521877 containerd[1551]: time="2025-07-01T08:36:55.521502566Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 1 08:36:55.525908 containerd[1551]: time="2025-07-01T08:36:55.525856143Z" level=info msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 1 08:36:55.527802 kubelet[2803]: I0701 08:36:55.527758 2803 scope.go:117] "RemoveContainer" containerID="68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a" Jul 1 08:36:55.542786 containerd[1551]: time="2025-07-01T08:36:55.540361995Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jul 1 08:36:55.748306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583318337.mount: Deactivated successfully. Jul 1 08:36:55.756081 containerd[1551]: time="2025-07-01T08:36:55.755858031Z" level=info msg="Container 2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:36:55.797246 containerd[1551]: time="2025-07-01T08:36:55.795900968Z" level=info msg="Container 86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:36:55.801926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469961632.mount: Deactivated successfully. 
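The rejected apiserver event above carries both a wall-clock FirstTimestamp and the m=+92.719107683 uptime suffix, so the two can be cross-checked against the kubelet's start time; a rough sketch, ignoring the small drift between the monotonic and wall clocks:

from datetime import datetime, timedelta, timezone

# Values copied from the kubelet event above (timestamp rounded to microseconds).
event_time = datetime(2025, 7, 1, 8, 36, 41, 886452, tzinfo=timezone.utc)  # FirstTimestamp
uptime     = timedelta(seconds=92.719107683)                               # the m=+ suffix

print((event_time - uptime).isoformat())  # ~2025-07-01T08:35:09.167344+00:00, i.e. roughly when the kubelet started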
Jul 1 08:36:55.907632 containerd[1551]: time="2025-07-01T08:36:55.907559616Z" level=info msg="Container 6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:36:56.131313 containerd[1551]: time="2025-07-01T08:36:56.130974889Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\"" Jul 1 08:36:56.135494 containerd[1551]: time="2025-07-01T08:36:56.135301755Z" level=info msg="StartContainer for \"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\"" Jul 1 08:36:56.142967 containerd[1551]: time="2025-07-01T08:36:56.142849473Z" level=info msg="connecting to shim 86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25" address="unix:///run/containerd/s/0163f98ad37e7e43c26b5c1ce4d28224590f7d4adc3c307b3c7c94cb4b012da4" protocol=ttrpc version=3 Jul 1 08:36:56.149917 sshd[4168]: Connection closed by 172.24.4.1 port 39398 Jul 1 08:36:56.149616 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Jul 1 08:36:56.179930 containerd[1551]: time="2025-07-01T08:36:56.179796992Z" level=info msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\"" Jul 1 08:36:56.184404 systemd[1]: sshd@7-172.24.4.49:22-172.24.4.1:39398.service: Deactivated successfully. Jul 1 08:36:56.188617 containerd[1551]: time="2025-07-01T08:36:56.188309582Z" level=info msg="StartContainer for \"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\"" Jul 1 08:36:56.199943 systemd[1]: session-10.scope: Deactivated successfully. Jul 1 08:36:56.204872 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Jul 1 08:36:56.208298 systemd-logind[1530]: Removed session 10. Jul 1 08:36:56.209963 containerd[1551]: time="2025-07-01T08:36:56.209770590Z" level=info msg="connecting to shim 2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f" address="unix:///run/containerd/s/21759f0e7be244e7cf159961ece0d17b73d35ad02ad783a91484f5a9505d1d0c" protocol=ttrpc version=3 Jul 1 08:36:56.219646 containerd[1551]: time="2025-07-01T08:36:56.219573239Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\"" Jul 1 08:36:56.223246 containerd[1551]: time="2025-07-01T08:36:56.223129479Z" level=info msg="StartContainer for \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\"" Jul 1 08:36:56.227970 containerd[1551]: time="2025-07-01T08:36:56.227916700Z" level=info msg="connecting to shim 6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" address="unix:///run/containerd/s/5da55d9a0ef3e4dcc7c9c05561194aab0a94dc97c1341c5d5e1e6892cb072eef" protocol=ttrpc version=3 Jul 1 08:36:56.251419 systemd[1]: Started cri-containerd-86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25.scope - libcontainer container 86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25. 
Jul 1 08:36:56.265372 systemd[1]: Started cri-containerd-2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f.scope - libcontainer container 2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f. Jul 1 08:36:56.281022 systemd[1]: Started cri-containerd-6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad.scope - libcontainer container 6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad. Jul 1 08:36:56.752750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149522313.mount: Deactivated successfully. Jul 1 08:36:58.203204 containerd[1551]: time="2025-07-01T08:36:58.202637383Z" level=info msg="StartContainer for \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" returns successfully" Jul 1 08:36:58.208002 containerd[1551]: time="2025-07-01T08:36:58.202822530Z" level=info msg="StartContainer for \"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\" returns successfully" Jul 1 08:36:58.208002 containerd[1551]: time="2025-07-01T08:36:58.202645017Z" level=info msg="StartContainer for \"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\" returns successfully" Jul 1 08:37:01.186572 systemd[1]: Started sshd@8-172.24.4.49:22-172.24.4.1:60622.service - OpenSSH per-connection server daemon (172.24.4.1:60622). Jul 1 08:37:02.170152 sshd[4282]: Accepted publickey for core from 172.24.4.1 port 60622 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:02.173550 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:02.193259 systemd-logind[1530]: New session 11 of user core. Jul 1 08:37:02.198442 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 1 08:37:02.939116 sshd[4285]: Connection closed by 172.24.4.1 port 60622 Jul 1 08:37:02.938834 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:02.949745 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Jul 1 08:37:02.952281 systemd[1]: sshd@8-172.24.4.49:22-172.24.4.1:60622.service: Deactivated successfully. Jul 1 08:37:02.960775 systemd[1]: session-11.scope: Deactivated successfully. Jul 1 08:37:02.969509 systemd-logind[1530]: Removed session 11. Jul 1 08:37:07.972612 systemd[1]: Started sshd@9-172.24.4.49:22-172.24.4.1:33552.service - OpenSSH per-connection server daemon (172.24.4.1:33552). Jul 1 08:37:09.348561 sshd[4298]: Accepted publickey for core from 172.24.4.1 port 33552 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:09.352881 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:09.367194 systemd-logind[1530]: New session 12 of user core. Jul 1 08:37:09.383742 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 1 08:37:10.381183 sshd[4301]: Connection closed by 172.24.4.1 port 33552 Jul 1 08:37:10.382871 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:10.393598 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Jul 1 08:37:10.395307 systemd[1]: sshd@9-172.24.4.49:22-172.24.4.1:33552.service: Deactivated successfully. Jul 1 08:37:10.404229 systemd[1]: session-12.scope: Deactivated successfully. Jul 1 08:37:10.407711 systemd-logind[1530]: Removed session 12. Jul 1 08:37:15.395862 systemd[1]: Started sshd@10-172.24.4.49:22-172.24.4.1:54996.service - OpenSSH per-connection server daemon (172.24.4.1:54996). 
Jul 1 08:37:16.727237 sshd[4318]: Accepted publickey for core from 172.24.4.1 port 54996 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:16.732195 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:16.744663 systemd-logind[1530]: New session 13 of user core. Jul 1 08:37:16.760669 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 1 08:37:17.415697 sshd[4321]: Connection closed by 172.24.4.1 port 54996 Jul 1 08:37:17.417136 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:17.427132 systemd[1]: sshd@10-172.24.4.49:22-172.24.4.1:54996.service: Deactivated successfully. Jul 1 08:37:17.433855 systemd[1]: session-13.scope: Deactivated successfully. Jul 1 08:37:17.438463 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Jul 1 08:37:17.442650 systemd-logind[1530]: Removed session 13. Jul 1 08:37:18.424672 update_engine[1534]: I20250701 08:37:18.424309 1534 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 1 08:37:18.429546 update_engine[1534]: I20250701 08:37:18.426187 1534 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 1 08:37:18.429546 update_engine[1534]: I20250701 08:37:18.427608 1534 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.432339 1534 omaha_request_params.cc:62] Current group set to developer Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.433528 1534 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.433559 1534 update_attempter.cc:643] Scheduling an action processor start. Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.433609 1534 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.433843 1534 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.434027 1534 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.434124 1534 omaha_request_action.cc:272] Request: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: Jul 1 08:37:18.436226 update_engine[1534]: I20250701 08:37:18.434163 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:37:18.443214 locksmithd[1578]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 1 08:37:18.445114 update_engine[1534]: I20250701 08:37:18.444449 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:37:18.446195 update_engine[1534]: I20250701 08:37:18.446110 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 1 08:37:18.454228 update_engine[1534]: E20250701 08:37:18.454115 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:37:18.454949 update_engine[1534]: I20250701 08:37:18.454871 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 1 08:37:22.448743 systemd[1]: Started sshd@11-172.24.4.49:22-172.24.4.1:54998.service - OpenSSH per-connection server daemon (172.24.4.1:54998). Jul 1 08:37:23.925201 sshd[4333]: Accepted publickey for core from 172.24.4.1 port 54998 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:23.928721 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:23.947265 systemd-logind[1530]: New session 14 of user core. Jul 1 08:37:23.960527 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 1 08:37:24.793122 sshd[4336]: Connection closed by 172.24.4.1 port 54998 Jul 1 08:37:24.792534 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:24.802427 systemd[1]: sshd@11-172.24.4.49:22-172.24.4.1:54998.service: Deactivated successfully. Jul 1 08:37:24.810680 systemd[1]: session-14.scope: Deactivated successfully. Jul 1 08:37:24.815588 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Jul 1 08:37:24.818358 systemd-logind[1530]: Removed session 14. Jul 1 08:37:28.427452 update_engine[1534]: I20250701 08:37:28.425607 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:37:28.436602 update_engine[1534]: I20250701 08:37:28.432548 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:37:28.438100 update_engine[1534]: I20250701 08:37:28.437477 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 1 08:37:28.444036 update_engine[1534]: E20250701 08:37:28.443490 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:37:28.444036 update_engine[1534]: I20250701 08:37:28.443931 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 1 08:37:29.817183 systemd[1]: Started sshd@12-172.24.4.49:22-172.24.4.1:40214.service - OpenSSH per-connection server daemon (172.24.4.1:40214). Jul 1 08:37:30.892128 sshd[4349]: Accepted publickey for core from 172.24.4.1 port 40214 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:30.895872 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:30.911381 systemd-logind[1530]: New session 15 of user core. Jul 1 08:37:30.920468 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 1 08:37:31.831931 sshd[4352]: Connection closed by 172.24.4.1 port 40214 Jul 1 08:37:31.830604 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:31.852004 systemd[1]: sshd@12-172.24.4.49:22-172.24.4.1:40214.service: Deactivated successfully. Jul 1 08:37:31.856636 systemd[1]: session-15.scope: Deactivated successfully. Jul 1 08:37:31.862536 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Jul 1 08:37:31.902831 systemd[1]: Started sshd@13-172.24.4.49:22-172.24.4.1:40216.service - OpenSSH per-connection server daemon (172.24.4.1:40216). Jul 1 08:37:31.907929 systemd-logind[1530]: Removed session 15. 
Jul 1 08:37:33.005005 sshd[4365]: Accepted publickey for core from 172.24.4.1 port 40216 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:33.008093 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:33.026181 systemd-logind[1530]: New session 16 of user core. Jul 1 08:37:33.033501 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 1 08:37:33.823286 sshd[4368]: Connection closed by 172.24.4.1 port 40216 Jul 1 08:37:33.825827 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:33.843827 systemd[1]: sshd@13-172.24.4.49:22-172.24.4.1:40216.service: Deactivated successfully. Jul 1 08:37:33.848446 systemd[1]: session-16.scope: Deactivated successfully. Jul 1 08:37:33.851718 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Jul 1 08:37:33.861907 systemd[1]: Started sshd@14-172.24.4.49:22-172.24.4.1:41608.service - OpenSSH per-connection server daemon (172.24.4.1:41608). Jul 1 08:37:33.864345 systemd-logind[1530]: Removed session 16. Jul 1 08:37:35.467323 sshd[4378]: Accepted publickey for core from 172.24.4.1 port 41608 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:35.472509 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:35.486449 systemd-logind[1530]: New session 17 of user core. Jul 1 08:37:35.490263 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 1 08:37:36.151048 sshd[4381]: Connection closed by 172.24.4.1 port 41608 Jul 1 08:37:36.152680 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:36.163370 systemd[1]: sshd@14-172.24.4.49:22-172.24.4.1:41608.service: Deactivated successfully. Jul 1 08:37:36.166388 systemd[1]: session-17.scope: Deactivated successfully. Jul 1 08:37:36.169736 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Jul 1 08:37:36.174220 systemd-logind[1530]: Removed session 17. Jul 1 08:37:38.425285 update_engine[1534]: I20250701 08:37:38.424881 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:37:38.427399 update_engine[1534]: I20250701 08:37:38.426491 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:37:38.427720 update_engine[1534]: I20250701 08:37:38.427641 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 1 08:37:38.433589 update_engine[1534]: E20250701 08:37:38.433473 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:37:38.433821 update_engine[1534]: I20250701 08:37:38.433650 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 1 08:37:41.212008 systemd[1]: Started sshd@15-172.24.4.49:22-172.24.4.1:41616.service - OpenSSH per-connection server daemon (172.24.4.1:41616). Jul 1 08:37:42.882098 sshd[4392]: Accepted publickey for core from 172.24.4.1 port 41616 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:42.891441 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:42.907116 systemd-logind[1530]: New session 18 of user core. Jul 1 08:37:42.915530 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 1 08:37:43.728758 sshd[4396]: Connection closed by 172.24.4.1 port 41616 Jul 1 08:37:43.728577 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:43.736360 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Jul 1 08:37:43.737029 systemd[1]: sshd@15-172.24.4.49:22-172.24.4.1:41616.service: Deactivated successfully. Jul 1 08:37:43.741910 systemd[1]: session-18.scope: Deactivated successfully. Jul 1 08:37:43.750340 systemd-logind[1530]: Removed session 18. Jul 1 08:37:48.426909 update_engine[1534]: I20250701 08:37:48.426750 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:37:48.427705 update_engine[1534]: I20250701 08:37:48.427163 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:37:48.427705 update_engine[1534]: I20250701 08:37:48.427470 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 1 08:37:48.432937 update_engine[1534]: E20250701 08:37:48.432869 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:37:48.432937 update_engine[1534]: I20250701 08:37:48.432916 1534 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 1 08:37:48.432937 update_engine[1534]: I20250701 08:37:48.432936 1534 omaha_request_action.cc:617] Omaha request response: Jul 1 08:37:48.433297 update_engine[1534]: E20250701 08:37:48.433096 1534 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 1 08:37:48.433370 update_engine[1534]: I20250701 08:37:48.433324 1534 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 1 08:37:48.433370 update_engine[1534]: I20250701 08:37:48.433332 1534 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:37:48.433370 update_engine[1534]: I20250701 08:37:48.433341 1534 update_attempter.cc:306] Processing Done. Jul 1 08:37:48.433653 update_engine[1534]: E20250701 08:37:48.433398 1534 update_attempter.cc:619] Update failed. Jul 1 08:37:48.433653 update_engine[1534]: I20250701 08:37:48.433413 1534 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 1 08:37:48.433653 update_engine[1534]: I20250701 08:37:48.433418 1534 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 1 08:37:48.433653 update_engine[1534]: I20250701 08:37:48.433423 1534 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 1 08:37:48.434009 update_engine[1534]: I20250701 08:37:48.433903 1534 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 1 08:37:48.434009 update_engine[1534]: I20250701 08:37:48.433983 1534 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 1 08:37:48.434009 update_engine[1534]: I20250701 08:37:48.433991 1534 omaha_request_action.cc:272] Request: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: Jul 1 08:37:48.434009 update_engine[1534]: I20250701 08:37:48.433998 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:37:48.434685 update_engine[1534]: I20250701 08:37:48.434163 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:37:48.434685 update_engine[1534]: I20250701 08:37:48.434363 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 1 08:37:48.435489 locksmithd[1578]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 1 08:37:48.439666 update_engine[1534]: E20250701 08:37:48.439603 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439652 1534 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439661 1534 omaha_request_action.cc:617] Omaha request response: Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439667 1534 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439672 1534 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439677 1534 update_attempter.cc:306] Processing Done. Jul 1 08:37:48.439666 update_engine[1534]: I20250701 08:37:48.439682 1534 update_attempter.cc:310] Error event sent. Jul 1 08:37:48.440365 update_engine[1534]: I20250701 08:37:48.439700 1534 update_check_scheduler.cc:74] Next update check in 46m0s Jul 1 08:37:48.440484 locksmithd[1578]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 1 08:37:48.753227 systemd[1]: Started sshd@16-172.24.4.49:22-172.24.4.1:53328.service - OpenSSH per-connection server daemon (172.24.4.1:53328). Jul 1 08:37:50.195521 sshd[4409]: Accepted publickey for core from 172.24.4.1 port 53328 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:50.198005 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:50.209188 systemd-logind[1530]: New session 19 of user core. Jul 1 08:37:50.220541 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 1 08:37:51.050180 sshd[4412]: Connection closed by 172.24.4.1 port 53328 Jul 1 08:37:51.053306 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:51.072706 systemd[1]: sshd@16-172.24.4.49:22-172.24.4.1:53328.service: Deactivated successfully. Jul 1 08:37:51.079596 systemd[1]: session-19.scope: Deactivated successfully. Jul 1 08:37:51.084560 systemd-logind[1530]: Session 19 logged out. 
Waiting for processes to exit. Jul 1 08:37:51.098263 systemd[1]: Started sshd@17-172.24.4.49:22-172.24.4.1:53342.service - OpenSSH per-connection server daemon (172.24.4.1:53342). Jul 1 08:37:51.104307 systemd-logind[1530]: Removed session 19. Jul 1 08:37:52.368144 sshd[4424]: Accepted publickey for core from 172.24.4.1 port 53342 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:52.370356 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:52.385869 systemd-logind[1530]: New session 20 of user core. Jul 1 08:37:52.398562 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 1 08:37:53.497122 sshd[4427]: Connection closed by 172.24.4.1 port 53342 Jul 1 08:37:53.496277 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:53.514577 systemd[1]: sshd@17-172.24.4.49:22-172.24.4.1:53342.service: Deactivated successfully. Jul 1 08:37:53.517948 systemd[1]: session-20.scope: Deactivated successfully. Jul 1 08:37:53.520181 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit. Jul 1 08:37:53.528548 systemd-logind[1530]: Removed session 20. Jul 1 08:37:53.534433 systemd[1]: Started sshd@18-172.24.4.49:22-172.24.4.1:48256.service - OpenSSH per-connection server daemon (172.24.4.1:48256). Jul 1 08:37:54.663475 sshd[4437]: Accepted publickey for core from 172.24.4.1 port 48256 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:37:54.666149 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:54.681474 systemd-logind[1530]: New session 21 of user core. Jul 1 08:37:54.686459 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 1 08:37:56.654502 sshd[4440]: Connection closed by 172.24.4.1 port 48256 Jul 1 08:37:56.655895 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:56.664729 systemd[1]: sshd@18-172.24.4.49:22-172.24.4.1:48256.service: Deactivated successfully. Jul 1 08:37:56.666792 systemd[1]: session-21.scope: Deactivated successfully. Jul 1 08:37:56.668937 systemd-logind[1530]: Session 21 logged out. Waiting for processes to exit. Jul 1 08:37:56.675399 systemd[1]: Started sshd@19-172.24.4.49:22-172.24.4.1:48258.service - OpenSSH per-connection server daemon (172.24.4.1:48258). Jul 1 08:37:56.677179 systemd-logind[1530]: Removed session 21. Jul 1 08:38:10.332300 systemd[1]: cri-containerd-2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f.scope: Deactivated successfully. 
Jul 1 08:38:17.662150 containerd[1551]: time="2025-07-01T08:38:10.387580676Z" level=info msg="received exit event container_id:\"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\" id:\"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\" pid:4224 exit_status:1 exited_at:{seconds:1751359090 nanos:386388330}" Jul 1 08:38:17.662150 containerd[1551]: time="2025-07-01T08:38:10.388938263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\" id:\"2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f\" pid:4224 exit_status:1 exited_at:{seconds:1751359090 nanos:386388330}" Jul 1 08:38:17.662150 containerd[1551]: time="2025-07-01T08:38:13.977750497Z" level=info msg="received exit event container_id:\"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\" id:\"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\" pid:4211 exit_status:1 exited_at:{seconds:1751359093 nanos:976976495}" Jul 1 08:38:17.662150 containerd[1551]: time="2025-07-01T08:38:13.978048907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\" id:\"86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25\" pid:4211 exit_status:1 exited_at:{seconds:1751359093 nanos:976976495}" Jul 1 08:38:10.332790 systemd[1]: cri-containerd-2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f.scope: Consumed 3.704s CPU time, 49.4M memory peak. Jul 1 08:38:17.663266 sshd[4457]: Accepted publickey for core from 172.24.4.1 port 48258 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:17.663816 kubelet[2803]: E0701 08:38:14.081209 2803 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal.184e13bbe2b6edf7 kube-system 712 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-9999-9-9-s-39d8ad6622.novalocal,UID:096de24328c9676aab98b000546c4460,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-9999-9-9-s-39d8ad6622.novalocal,},FirstTimestamp:2025-07-01 08:36:42 +0000 UTC,LastTimestamp:2025-07-01 08:38:02.815393464 +0000 UTC m=+173.648049456,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-9-s-39d8ad6622.novalocal,}" Jul 1 08:38:17.663816 kubelet[2803]: E0701 08:38:16.997500 2803 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-s-39d8ad6622.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 1 08:38:17.659682 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:10.514281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f-rootfs.mount: Deactivated successfully. Jul 1 08:38:13.973256 systemd[1]: cri-containerd-86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25.scope: Deactivated successfully. 
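Note: the containerd exit events above report exited_at as an epoch (e.g. seconds:1751359090 nanos:386388330) rather than wall-clock time. A quick standard-library conversion for cross-checking those epochs against the journal timestamps, with the seconds values copied from the two events above:

    from datetime import datetime, timezone

    # exited_at seconds copied from the exit events logged above
    for sec in (1751359090, 1751359093):
        print(datetime.fromtimestamp(sec, tz=timezone.utc).isoformat())
    # 2025-07-01T08:38:10+00:00
    # 2025-07-01T08:38:13+00:00

These line up with the 08:38:10 and 08:38:13 "Deactivated successfully" entries for the same containers.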
Jul 1 08:38:13.973924 systemd[1]: cri-containerd-86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25.scope: Consumed 2.622s CPU time, 18.5M memory peak. Jul 1 08:38:14.023722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25-rootfs.mount: Deactivated successfully. Jul 1 08:38:17.675541 systemd-logind[1530]: New session 22 of user core. Jul 1 08:38:17.676960 systemd[1]: cri-containerd-6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad.scope: Deactivated successfully. Jul 1 08:38:17.683040 containerd[1551]: time="2025-07-01T08:38:17.682939768Z" level=info msg="received exit event container_id:\"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" id:\"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" pid:4231 exit_status:1 exited_at:{seconds:1751359097 nanos:681676247}" Jul 1 08:38:17.683903 containerd[1551]: time="2025-07-01T08:38:17.683835588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" id:\"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" pid:4231 exit_status:1 exited_at:{seconds:1751359097 nanos:681676247}" Jul 1 08:38:17.685080 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 1 08:38:17.750688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad-rootfs.mount: Deactivated successfully. Jul 1 08:38:18.744135 kubelet[2803]: E0701 08:38:18.744017 2803 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-9999-9-9-s-39d8ad6622.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 1 08:38:24.210311 kubelet[2803]: E0701 08:38:24.209320 2803 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.736s" Jul 1 08:38:24.243735 kubelet[2803]: I0701 08:38:24.242754 2803 scope.go:117] "RemoveContainer" containerID="e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44" Jul 1 08:38:24.248545 kubelet[2803]: I0701 08:38:24.248468 2803 scope.go:117] "RemoveContainer" containerID="86b6935d39d7881ca49dc50c3fd7e7871dc873d5aded6eec12e7d60045be2b25" Jul 1 08:38:24.257134 containerd[1551]: time="2025-07-01T08:38:24.255611649Z" level=info msg="RemoveContainer for \"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\"" Jul 1 08:38:24.277004 containerd[1551]: time="2025-07-01T08:38:24.276860624Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jul 1 08:38:24.285022 kubelet[2803]: I0701 08:38:24.284977 2803 scope.go:117] "RemoveContainer" containerID="2d01e89da38bdfd221ee3b5df46b3b3e3df6146b167b10b62d12d9b1bd2b7f3f" Jul 1 08:38:24.290974 kubelet[2803]: I0701 08:38:24.290939 2803 scope.go:117] "RemoveContainer" containerID="6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" Jul 1 08:38:24.291669 kubelet[2803]: E0701 08:38:24.291559 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cilium-operator pod=cilium-operator-6c4d7847fc-htg9w_kube-system(489c1c0b-9a01-4d83-a65f-9e542bbf37ba)\"" 
pod="kube-system/cilium-operator-6c4d7847fc-htg9w" podUID="489c1c0b-9a01-4d83-a65f-9e542bbf37ba" Jul 1 08:38:24.291788 containerd[1551]: time="2025-07-01T08:38:24.291666214Z" level=info msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jul 1 08:38:24.440021 containerd[1551]: time="2025-07-01T08:38:24.438369809Z" level=info msg="RemoveContainer for \"e87f3c7110ac13b1312ccb6ba85726c29b493f678465b6d19c13af51b7c89a44\" returns successfully" Jul 1 08:38:24.440263 kubelet[2803]: I0701 08:38:24.439266 2803 scope.go:117] "RemoveContainer" containerID="433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135" Jul 1 08:38:24.450202 containerd[1551]: time="2025-07-01T08:38:24.450053304Z" level=info msg="RemoveContainer for \"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\"" Jul 1 08:38:24.605250 containerd[1551]: time="2025-07-01T08:38:24.603696525Z" level=info msg="Container 00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:24.670641 containerd[1551]: time="2025-07-01T08:38:24.670586396Z" level=info msg="RemoveContainer for \"433dd6ea4ed61989ffb9a8f7f9ea576a77c3ffbd87b044437442849553419135\" returns successfully" Jul 1 08:38:24.672405 kubelet[2803]: I0701 08:38:24.672352 2803 scope.go:117] "RemoveContainer" containerID="68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a" Jul 1 08:38:24.676771 containerd[1551]: time="2025-07-01T08:38:24.676449654Z" level=info msg="RemoveContainer for \"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\"" Jul 1 08:38:24.796315 containerd[1551]: time="2025-07-01T08:38:24.796225953Z" level=info msg="Container 4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:24.816588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466569912.mount: Deactivated successfully. Jul 1 08:38:25.040011 containerd[1551]: time="2025-07-01T08:38:25.039954955Z" level=info msg="RemoveContainer for \"68a0f309c46583272305eb68d118b94aa9c0acf49c6e6371f8dcb5bfd27be28a\" returns successfully" Jul 1 08:38:25.253917 containerd[1551]: time="2025-07-01T08:38:25.253810704Z" level=info msg="CreateContainer within sandbox \"40d23bf9946524950f273819d69fb289be49d653de3ad0be7c460327bf55108b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c\"" Jul 1 08:38:25.255553 containerd[1551]: time="2025-07-01T08:38:25.255132654Z" level=info msg="StartContainer for \"00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c\"" Jul 1 08:38:25.257931 sshd[4490]: Connection closed by 172.24.4.1 port 48258 Jul 1 08:38:25.259333 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:25.265912 containerd[1551]: time="2025-07-01T08:38:25.265843915Z" level=info msg="connecting to shim 00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c" address="unix:///run/containerd/s/0163f98ad37e7e43c26b5c1ce4d28224590f7d4adc3c307b3c7c94cb4b012da4" protocol=ttrpc version=3 Jul 1 08:38:25.281844 systemd[1]: sshd@19-172.24.4.49:22-172.24.4.1:48258.service: Deactivated successfully. Jul 1 08:38:25.288860 systemd[1]: session-22.scope: Deactivated successfully. Jul 1 08:38:25.293000 systemd-logind[1530]: Session 22 logged out. Waiting for processes to exit. 
Jul 1 08:38:25.301191 systemd[1]: Started sshd@20-172.24.4.49:22-172.24.4.1:58684.service - OpenSSH per-connection server daemon (172.24.4.1:58684). Jul 1 08:38:25.304223 systemd-logind[1530]: Removed session 22. Jul 1 08:38:25.312303 containerd[1551]: time="2025-07-01T08:38:25.311780806Z" level=info msg="CreateContainer within sandbox \"8e8f7d26ada2bf738f7a0b8374bb66c3ee9bf58e9cd3f83abc3dac6507c75564\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c\"" Jul 1 08:38:25.312491 containerd[1551]: time="2025-07-01T08:38:25.312427429Z" level=info msg="StartContainer for \"4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c\"" Jul 1 08:38:25.317138 containerd[1551]: time="2025-07-01T08:38:25.315427796Z" level=info msg="connecting to shim 4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c" address="unix:///run/containerd/s/21759f0e7be244e7cf159961ece0d17b73d35ad02ad783a91484f5a9505d1d0c" protocol=ttrpc version=3 Jul 1 08:38:25.343263 systemd[1]: Started cri-containerd-00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c.scope - libcontainer container 00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c. Jul 1 08:38:25.377268 systemd[1]: Started cri-containerd-4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c.scope - libcontainer container 4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c. Jul 1 08:38:25.836090 containerd[1551]: time="2025-07-01T08:38:25.835843516Z" level=info msg="StartContainer for \"4a8e70c4111fcb54e481ae00b25237a5dc4243974cc6538707fdc6f28e9b6c1c\" returns successfully" Jul 1 08:38:25.839254 containerd[1551]: time="2025-07-01T08:38:25.839198629Z" level=info msg="StartContainer for \"00b5bfd0fb92a29038412512f768bc8fa8b3e48656fd113b1155d776e814f48c\" returns successfully" Jul 1 08:38:28.154245 sshd[4513]: Accepted publickey for core from 172.24.4.1 port 58684 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:28.161539 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:28.190256 systemd-logind[1530]: New session 23 of user core. Jul 1 08:38:28.198350 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 1 08:38:28.906160 sshd[4582]: Connection closed by 172.24.4.1 port 58684 Jul 1 08:38:28.905801 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:28.922005 systemd[1]: sshd@20-172.24.4.49:22-172.24.4.1:58684.service: Deactivated successfully. Jul 1 08:38:28.933825 systemd[1]: session-23.scope: Deactivated successfully. Jul 1 08:38:28.936347 systemd-logind[1530]: Session 23 logged out. Waiting for processes to exit. Jul 1 08:38:28.940137 systemd-logind[1530]: Removed session 23. 
Jul 1 08:38:32.696242 kubelet[2803]: I0701 08:38:32.696169 2803 scope.go:117] "RemoveContainer" containerID="6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" Jul 1 08:38:32.707310 containerd[1551]: time="2025-07-01T08:38:32.706775083Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for container &ContainerMetadata{Name:cilium-operator,Attempt:2,}" Jul 1 08:38:32.730072 containerd[1551]: time="2025-07-01T08:38:32.728093559Z" level=info msg="Container 269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:32.743749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412163179.mount: Deactivated successfully. Jul 1 08:38:32.760095 containerd[1551]: time="2025-07-01T08:38:32.759965319Z" level=info msg="CreateContainer within sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" for &ContainerMetadata{Name:cilium-operator,Attempt:2,} returns container id \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\"" Jul 1 08:38:32.762161 containerd[1551]: time="2025-07-01T08:38:32.762045461Z" level=info msg="StartContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\"" Jul 1 08:38:32.764919 containerd[1551]: time="2025-07-01T08:38:32.764858127Z" level=info msg="connecting to shim 269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c" address="unix:///run/containerd/s/5da55d9a0ef3e4dcc7c9c05561194aab0a94dc97c1341c5d5e1e6892cb072eef" protocol=ttrpc version=3 Jul 1 08:38:32.810540 systemd[1]: Started cri-containerd-269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c.scope - libcontainer container 269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c. Jul 1 08:38:32.901592 containerd[1551]: time="2025-07-01T08:38:32.901455155Z" level=info msg="StartContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" returns successfully" Jul 1 08:38:33.927672 systemd[1]: Started sshd@21-172.24.4.49:22-172.24.4.1:51822.service - OpenSSH per-connection server daemon (172.24.4.1:51822). Jul 1 08:38:35.386177 sshd[4627]: Accepted publickey for core from 172.24.4.1 port 51822 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:35.391225 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:35.404197 systemd-logind[1530]: New session 24 of user core. Jul 1 08:38:35.415773 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 1 08:38:36.144665 sshd[4630]: Connection closed by 172.24.4.1 port 51822 Jul 1 08:38:36.146153 sshd-session[4627]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:36.154597 systemd[1]: sshd@21-172.24.4.49:22-172.24.4.1:51822.service: Deactivated successfully. Jul 1 08:38:36.159175 systemd[1]: session-24.scope: Deactivated successfully. Jul 1 08:38:36.162273 systemd-logind[1530]: Session 24 logged out. Waiting for processes to exit. Jul 1 08:38:36.165342 systemd-logind[1530]: Removed session 24. Jul 1 08:38:41.174862 systemd[1]: Started sshd@22-172.24.4.49:22-172.24.4.1:51836.service - OpenSSH per-connection server daemon (172.24.4.1:51836). 
Jul 1 08:38:42.521159 sshd[4642]: Accepted publickey for core from 172.24.4.1 port 51836 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:42.526491 sshd-session[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:42.546337 systemd-logind[1530]: New session 25 of user core. Jul 1 08:38:42.558570 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 1 08:38:43.262559 sshd[4645]: Connection closed by 172.24.4.1 port 51836 Jul 1 08:38:43.263708 sshd-session[4642]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:43.276573 systemd[1]: sshd@22-172.24.4.49:22-172.24.4.1:51836.service: Deactivated successfully. Jul 1 08:38:43.285033 systemd[1]: session-25.scope: Deactivated successfully. Jul 1 08:38:43.291939 systemd-logind[1530]: Session 25 logged out. Waiting for processes to exit. Jul 1 08:38:43.298012 systemd-logind[1530]: Removed session 25. Jul 1 08:38:48.306761 systemd[1]: Started sshd@23-172.24.4.49:22-172.24.4.1:53898.service - OpenSSH per-connection server daemon (172.24.4.1:53898). Jul 1 08:38:49.588037 sshd[4660]: Accepted publickey for core from 172.24.4.1 port 53898 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:49.589774 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:49.609310 systemd-logind[1530]: New session 26 of user core. Jul 1 08:38:49.616621 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 1 08:38:50.482943 sshd[4663]: Connection closed by 172.24.4.1 port 53898 Jul 1 08:38:50.482678 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:50.495210 systemd[1]: sshd@23-172.24.4.49:22-172.24.4.1:53898.service: Deactivated successfully. Jul 1 08:38:50.504523 systemd[1]: session-26.scope: Deactivated successfully. Jul 1 08:38:50.509242 systemd-logind[1530]: Session 26 logged out. Waiting for processes to exit. Jul 1 08:38:50.514383 systemd-logind[1530]: Removed session 26. Jul 1 08:38:55.518434 systemd[1]: Started sshd@24-172.24.4.49:22-172.24.4.1:56648.service - OpenSSH per-connection server daemon (172.24.4.1:56648). Jul 1 08:38:56.611847 sshd[4676]: Accepted publickey for core from 172.24.4.1 port 56648 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:38:56.614751 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:56.630203 systemd-logind[1530]: New session 27 of user core. Jul 1 08:38:56.639461 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 1 08:38:57.490797 sshd[4679]: Connection closed by 172.24.4.1 port 56648 Jul 1 08:38:57.492525 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:57.501676 systemd[1]: sshd@24-172.24.4.49:22-172.24.4.1:56648.service: Deactivated successfully. Jul 1 08:38:57.509984 systemd[1]: session-27.scope: Deactivated successfully. Jul 1 08:38:57.515580 systemd-logind[1530]: Session 27 logged out. Waiting for processes to exit. Jul 1 08:38:57.518994 systemd-logind[1530]: Removed session 27. Jul 1 08:39:02.534129 systemd[1]: Started sshd@25-172.24.4.49:22-172.24.4.1:56654.service - OpenSSH per-connection server daemon (172.24.4.1:56654). 
Jul 1 08:39:03.780182 sshd[4691]: Accepted publickey for core from 172.24.4.1 port 56654 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:39:03.787997 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:03.801219 systemd-logind[1530]: New session 28 of user core. Jul 1 08:39:03.814646 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 1 08:39:04.721582 sshd[4694]: Connection closed by 172.24.4.1 port 56654 Jul 1 08:39:04.720339 sshd-session[4691]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:04.743818 systemd[1]: sshd@25-172.24.4.49:22-172.24.4.1:56654.service: Deactivated successfully. Jul 1 08:39:04.753598 systemd[1]: session-28.scope: Deactivated successfully. Jul 1 08:39:04.760247 systemd-logind[1530]: Session 28 logged out. Waiting for processes to exit. Jul 1 08:39:04.772821 systemd[1]: Started sshd@26-172.24.4.49:22-172.24.4.1:52048.service - OpenSSH per-connection server daemon (172.24.4.1:52048). Jul 1 08:39:04.776641 systemd-logind[1530]: Removed session 28. Jul 1 08:39:05.907243 sshd[4705]: Accepted publickey for core from 172.24.4.1 port 52048 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:39:05.911244 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:05.926658 systemd-logind[1530]: New session 29 of user core. Jul 1 08:39:05.934460 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 1 08:39:08.011796 containerd[1551]: time="2025-07-01T08:39:08.011668826Z" level=info msg="StopContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" with timeout 30 (s)" Jul 1 08:39:08.015555 containerd[1551]: time="2025-07-01T08:39:08.014857267Z" level=info msg="Stop container \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" with signal terminated" Jul 1 08:39:08.055995 systemd[1]: cri-containerd-269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c.scope: Deactivated successfully. 
Jul 1 08:39:08.059900 containerd[1551]: time="2025-07-01T08:39:08.059733909Z" level=info msg="received exit event container_id:\"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" id:\"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" pid:4607 exited_at:{seconds:1751359148 nanos:58038680}" Jul 1 08:39:08.061087 containerd[1551]: time="2025-07-01T08:39:08.059809391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" id:\"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" pid:4607 exited_at:{seconds:1751359148 nanos:58038680}" Jul 1 08:39:08.062217 containerd[1551]: time="2025-07-01T08:39:08.061836252Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:39:08.070477 containerd[1551]: time="2025-07-01T08:39:08.070395857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" id:\"734642a1c569f6edf370ee1878b905d596f912038a3441ff7a9c41349cba320c\" pid:4736 exited_at:{seconds:1751359148 nanos:69626604}" Jul 1 08:39:08.089432 containerd[1551]: time="2025-07-01T08:39:08.089355308Z" level=info msg="StopContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" with timeout 2 (s)" Jul 1 08:39:08.090458 containerd[1551]: time="2025-07-01T08:39:08.090418512Z" level=info msg="Stop container \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" with signal terminated" Jul 1 08:39:08.110616 systemd-networkd[1438]: lxc_health: Link DOWN Jul 1 08:39:08.110626 systemd-networkd[1438]: lxc_health: Lost carrier Jul 1 08:39:08.139776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c-rootfs.mount: Deactivated successfully. Jul 1 08:39:08.145170 systemd[1]: cri-containerd-224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f.scope: Deactivated successfully. Jul 1 08:39:08.145627 systemd[1]: cri-containerd-224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f.scope: Consumed 12.029s CPU time, 126.3M memory peak, 136K read from disk, 13.3M written to disk. 
Jul 1 08:39:08.149593 containerd[1551]: time="2025-07-01T08:39:08.149271330Z" level=info msg="received exit event container_id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" pid:3454 exited_at:{seconds:1751359148 nanos:147323186}" Jul 1 08:39:08.150917 containerd[1551]: time="2025-07-01T08:39:08.150788205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" id:\"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" pid:3454 exited_at:{seconds:1751359148 nanos:147323186}" Jul 1 08:39:08.156075 containerd[1551]: time="2025-07-01T08:39:08.155868193Z" level=info msg="StopContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" returns successfully" Jul 1 08:39:08.157299 containerd[1551]: time="2025-07-01T08:39:08.157267327Z" level=info msg="StopPodSandbox for \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\"" Jul 1 08:39:08.157450 containerd[1551]: time="2025-07-01T08:39:08.157390048Z" level=info msg="Container to stop \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.157450 containerd[1551]: time="2025-07-01T08:39:08.157408883Z" level=info msg="Container to stop \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.182128 systemd[1]: cri-containerd-bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110.scope: Deactivated successfully. Jul 1 08:39:08.184606 containerd[1551]: time="2025-07-01T08:39:08.184358300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" pid:3030 exit_status:137 exited_at:{seconds:1751359148 nanos:183584800}" Jul 1 08:39:08.191930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f-rootfs.mount: Deactivated successfully. 
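Note: the exit events in this capture embed the container id, exit status, and exited_at epoch inside the escaped msg="..." field. A rough Python sketch for pulling those fields out of a saved copy of this journal; the regex is an assumption tuned to the escaped quoting shown here, not anything provided by containerd:

    import re
    import sys

    # Matches entries such as:
    #   msg="received exit event container_id:\"2d01e89...\" ... pid:4224 exit_status:1 exited_at:{seconds:1751359090 ...}"
    # Entries logged without an exit_status field are skipped.
    EXIT_EVENT = re.compile(
        r'container_id:\\"(?P<cid>[0-9a-f]{64})\\"'
        r'.*?exit_status:(?P<status>\d+)'
        r'.*?exited_at:\{seconds:(?P<sec>\d+)'
    )

    for line in sys.stdin:
        m = EXIT_EVENT.search(line)
        if m:
            print(m.group("cid")[:12], m.group("status"), m.group("sec"))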
Jul 1 08:39:08.213236 containerd[1551]: time="2025-07-01T08:39:08.213168127Z" level=info msg="StopContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" returns successfully" Jul 1 08:39:08.214977 containerd[1551]: time="2025-07-01T08:39:08.214941433Z" level=info msg="StopPodSandbox for \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\"" Jul 1 08:39:08.216453 containerd[1551]: time="2025-07-01T08:39:08.215074883Z" level=info msg="Container to stop \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.216453 containerd[1551]: time="2025-07-01T08:39:08.216162363Z" level=info msg="Container to stop \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.216453 containerd[1551]: time="2025-07-01T08:39:08.216186929Z" level=info msg="Container to stop \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.216453 containerd[1551]: time="2025-07-01T08:39:08.216202498Z" level=info msg="Container to stop \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.216453 containerd[1551]: time="2025-07-01T08:39:08.216214060Z" level=info msg="Container to stop \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:08.236328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110-rootfs.mount: Deactivated successfully. Jul 1 08:39:08.242384 containerd[1551]: time="2025-07-01T08:39:08.242330145Z" level=info msg="shim disconnected" id=bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110 namespace=k8s.io Jul 1 08:39:08.242643 containerd[1551]: time="2025-07-01T08:39:08.242576326Z" level=warning msg="cleaning up after shim disconnected" id=bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110 namespace=k8s.io Jul 1 08:39:08.242988 containerd[1551]: time="2025-07-01T08:39:08.242600291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 1 08:39:08.244761 systemd[1]: cri-containerd-5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1.scope: Deactivated successfully. 
Jul 1 08:39:08.281353 containerd[1551]: time="2025-07-01T08:39:08.280553549Z" level=info msg="received exit event sandbox_id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" exit_status:137 exited_at:{seconds:1751359148 nanos:183584800}" Jul 1 08:39:08.282348 containerd[1551]: time="2025-07-01T08:39:08.281486579Z" level=error msg="Failed to handle event container_id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" pid:3030 exit_status:137 exited_at:{seconds:1751359148 nanos:183584800} for bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Jul 1 08:39:08.282348 containerd[1551]: time="2025-07-01T08:39:08.281657920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" id:\"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" pid:2947 exit_status:137 exited_at:{seconds:1751359148 nanos:252397899}" Jul 1 08:39:08.283692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1-rootfs.mount: Deactivated successfully. Jul 1 08:39:08.286981 containerd[1551]: time="2025-07-01T08:39:08.286934738Z" level=info msg="TearDown network for sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" successfully" Jul 1 08:39:08.287481 containerd[1551]: time="2025-07-01T08:39:08.286973019Z" level=info msg="StopPodSandbox for \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" returns successfully" Jul 1 08:39:08.289552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110-shm.mount: Deactivated successfully. 
Jul 1 08:39:08.292939 containerd[1551]: time="2025-07-01T08:39:08.292613288Z" level=info msg="received exit event sandbox_id:\"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" exit_status:137 exited_at:{seconds:1751359148 nanos:252397899}" Jul 1 08:39:08.294826 containerd[1551]: time="2025-07-01T08:39:08.294764714Z" level=info msg="shim disconnected" id=5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1 namespace=k8s.io Jul 1 08:39:08.295204 containerd[1551]: time="2025-07-01T08:39:08.295096296Z" level=warning msg="cleaning up after shim disconnected" id=5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1 namespace=k8s.io Jul 1 08:39:08.295204 containerd[1551]: time="2025-07-01T08:39:08.295113929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 1 08:39:08.297075 containerd[1551]: time="2025-07-01T08:39:08.295044478Z" level=info msg="TearDown network for sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" successfully" Jul 1 08:39:08.297075 containerd[1551]: time="2025-07-01T08:39:08.296260960Z" level=info msg="StopPodSandbox for \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" returns successfully" Jul 1 08:39:08.335146 kubelet[2803]: I0701 08:39:08.335011 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hubble-tls\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336153 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-bpf-maps\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336197 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-clustermesh-secrets\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336222 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-cgroup\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336256 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-cilium-config-path\") pod \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\" (UID: \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336286 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-config-path\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338080 kubelet[2803]: I0701 08:39:08.336304 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-xtables-lock\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336331 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cni-path\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336356 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-net\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336382 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-run\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336407 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55ll7\" (UniqueName: \"kubernetes.io/projected/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-kube-api-access-55ll7\") pod \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\" (UID: \"489c1c0b-9a01-4d83-a65f-9e542bbf37ba\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336431 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-lib-modules\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338346 kubelet[2803]: I0701 08:39:08.336466 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-kernel\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338525 kubelet[2803]: I0701 08:39:08.336493 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hostproc\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338525 kubelet[2803]: I0701 08:39:08.336520 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wbmj\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-kube-api-access-8wbmj\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338525 kubelet[2803]: I0701 08:39:08.336554 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-etc-cni-netd\") pod \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\" (UID: \"26f99ec5-703e-4073-b4cc-a22c44f1ac1a\") " Jul 1 08:39:08.338525 kubelet[2803]: I0701 08:39:08.336689 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-etc-cni-netd" 
(OuterVolumeSpecName: "etc-cni-netd") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.338525 kubelet[2803]: I0701 08:39:08.336760 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.340083 kubelet[2803]: I0701 08:39:08.339003 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.340083 kubelet[2803]: I0701 08:39:08.339097 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.344130 kubelet[2803]: I0701 08:39:08.343464 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "489c1c0b-9a01-4d83-a65f-9e542bbf37ba" (UID: "489c1c0b-9a01-4d83-a65f-9e542bbf37ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 1 08:39:08.344756 kubelet[2803]: I0701 08:39:08.344417 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.349087 kubelet[2803]: I0701 08:39:08.345554 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.349087 kubelet[2803]: I0701 08:39:08.345573 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.349087 kubelet[2803]: I0701 08:39:08.345605 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hostproc" (OuterVolumeSpecName: "hostproc") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.349087 kubelet[2803]: I0701 08:39:08.347132 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.349087 kubelet[2803]: I0701 08:39:08.347159 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cni-path" (OuterVolumeSpecName: "cni-path") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 1 08:39:08.357647 kubelet[2803]: I0701 08:39:08.357399 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-kube-api-access-55ll7" (OuterVolumeSpecName: "kube-api-access-55ll7") pod "489c1c0b-9a01-4d83-a65f-9e542bbf37ba" (UID: "489c1c0b-9a01-4d83-a65f-9e542bbf37ba"). InnerVolumeSpecName "kube-api-access-55ll7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:39:08.357647 kubelet[2803]: I0701 08:39:08.357470 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 1 08:39:08.357842 kubelet[2803]: I0701 08:39:08.357693 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-kube-api-access-8wbmj" (OuterVolumeSpecName: "kube-api-access-8wbmj") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "kube-api-access-8wbmj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:39:08.358399 kubelet[2803]: I0701 08:39:08.358359 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 1 08:39:08.360141 kubelet[2803]: I0701 08:39:08.359012 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "26f99ec5-703e-4073-b4cc-a22c44f1ac1a" (UID: "26f99ec5-703e-4073-b4cc-a22c44f1ac1a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437619 2803 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cni-path\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437661 2803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-net\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437683 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-run\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437696 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55ll7\" (UniqueName: \"kubernetes.io/projected/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-kube-api-access-55ll7\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437707 2803 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-lib-modules\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437727 2803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-host-proc-sys-kernel\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.437873 kubelet[2803]: I0701 08:39:08.437737 2803 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hostproc\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437748 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wbmj\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-kube-api-access-8wbmj\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437758 2803 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-etc-cni-netd\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437767 2803 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-hubble-tls\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437791 2803 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-bpf-maps\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437804 2803 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-clustermesh-secrets\") on node 
\"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437814 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-cgroup\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438292 kubelet[2803]: I0701 08:39:08.437824 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/489c1c0b-9a01-4d83-a65f-9e542bbf37ba-cilium-config-path\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438568 kubelet[2803]: I0701 08:39:08.437833 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-cilium-config-path\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.438568 kubelet[2803]: I0701 08:39:08.437845 2803 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26f99ec5-703e-4073-b4cc-a22c44f1ac1a-xtables-lock\") on node \"ci-9999-9-9-s-39d8ad6622.novalocal\" DevicePath \"\"" Jul 1 08:39:08.608713 kubelet[2803]: I0701 08:39:08.607343 2803 scope.go:117] "RemoveContainer" containerID="224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f" Jul 1 08:39:08.622561 containerd[1551]: time="2025-07-01T08:39:08.621711519Z" level=info msg="RemoveContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\"" Jul 1 08:39:08.629921 systemd[1]: Removed slice kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice - libcontainer container kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice. Jul 1 08:39:08.630539 systemd[1]: kubepods-burstable-pod26f99ec5_703e_4073_b4cc_a22c44f1ac1a.slice: Consumed 12.193s CPU time, 126.8M memory peak, 136K read from disk, 13.3M written to disk. Jul 1 08:39:08.638093 containerd[1551]: time="2025-07-01T08:39:08.638021080Z" level=info msg="RemoveContainer for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" returns successfully" Jul 1 08:39:08.639215 kubelet[2803]: I0701 08:39:08.639151 2803 scope.go:117] "RemoveContainer" containerID="950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f" Jul 1 08:39:08.648849 systemd[1]: Removed slice kubepods-besteffort-pod489c1c0b_9a01_4d83_a65f_9e542bbf37ba.slice - libcontainer container kubepods-besteffort-pod489c1c0b_9a01_4d83_a65f_9e542bbf37ba.slice. Jul 1 08:39:08.649425 systemd[1]: kubepods-besteffort-pod489c1c0b_9a01_4d83_a65f_9e542bbf37ba.slice: Consumed 1.896s CPU time, 25.4M memory peak, 12K written to disk. 
Jul 1 08:39:08.649657 containerd[1551]: time="2025-07-01T08:39:08.649198826Z" level=info msg="RemoveContainer for \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\"" Jul 1 08:39:08.657945 containerd[1551]: time="2025-07-01T08:39:08.657840725Z" level=info msg="RemoveContainer for \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" returns successfully" Jul 1 08:39:08.658714 kubelet[2803]: I0701 08:39:08.658615 2803 scope.go:117] "RemoveContainer" containerID="3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4" Jul 1 08:39:08.664046 containerd[1551]: time="2025-07-01T08:39:08.663884010Z" level=info msg="RemoveContainer for \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\"" Jul 1 08:39:08.690399 containerd[1551]: time="2025-07-01T08:39:08.690352294Z" level=info msg="RemoveContainer for \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" returns successfully" Jul 1 08:39:08.691902 kubelet[2803]: I0701 08:39:08.691850 2803 scope.go:117] "RemoveContainer" containerID="0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372" Jul 1 08:39:08.696165 containerd[1551]: time="2025-07-01T08:39:08.696114793Z" level=info msg="RemoveContainer for \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\"" Jul 1 08:39:08.701959 containerd[1551]: time="2025-07-01T08:39:08.701648722Z" level=info msg="RemoveContainer for \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" returns successfully" Jul 1 08:39:08.702603 kubelet[2803]: I0701 08:39:08.702168 2803 scope.go:117] "RemoveContainer" containerID="d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f" Jul 1 08:39:08.707133 containerd[1551]: time="2025-07-01T08:39:08.706926923Z" level=info msg="RemoveContainer for \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\"" Jul 1 08:39:08.720544 containerd[1551]: time="2025-07-01T08:39:08.720480524Z" level=info msg="RemoveContainer for \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" returns successfully" Jul 1 08:39:08.721399 kubelet[2803]: I0701 08:39:08.721318 2803 scope.go:117] "RemoveContainer" containerID="224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f" Jul 1 08:39:08.722737 containerd[1551]: time="2025-07-01T08:39:08.722648851Z" level=error msg="ContainerStatus for \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\": not found" Jul 1 08:39:08.723374 kubelet[2803]: E0701 08:39:08.723155 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\": not found" containerID="224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f" Jul 1 08:39:08.723483 kubelet[2803]: I0701 08:39:08.723249 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f"} err="failed to get container status \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"224f69c88161e21ec5354dd042001cd4d29de15e53dc5416c24599e435a21a2f\": not found" Jul 1 08:39:08.723678 kubelet[2803]: I0701 08:39:08.723572 2803 
scope.go:117] "RemoveContainer" containerID="950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f" Jul 1 08:39:08.723944 containerd[1551]: time="2025-07-01T08:39:08.723914084Z" level=error msg="ContainerStatus for \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\": not found" Jul 1 08:39:08.724257 kubelet[2803]: E0701 08:39:08.724238 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\": not found" containerID="950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f" Jul 1 08:39:08.724428 kubelet[2803]: I0701 08:39:08.724307 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f"} err="failed to get container status \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"950291d8f054a4765d30897e3fbee9c2449e3075a083dad5e963015df41bbc2f\": not found" Jul 1 08:39:08.724428 kubelet[2803]: I0701 08:39:08.724326 2803 scope.go:117] "RemoveContainer" containerID="3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4" Jul 1 08:39:08.724856 containerd[1551]: time="2025-07-01T08:39:08.724735595Z" level=error msg="ContainerStatus for \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\": not found" Jul 1 08:39:08.725035 kubelet[2803]: E0701 08:39:08.725005 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\": not found" containerID="3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4" Jul 1 08:39:08.725250 kubelet[2803]: I0701 08:39:08.725144 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4"} err="failed to get container status \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\": rpc error: code = NotFound desc = an error occurred when try to find container \"3981e798361bce16df11fff65620d1c97144cdbebc72ebe86b4d90494b7b2fe4\": not found" Jul 1 08:39:08.725250 kubelet[2803]: I0701 08:39:08.725165 2803 scope.go:117] "RemoveContainer" containerID="0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372" Jul 1 08:39:08.725882 containerd[1551]: time="2025-07-01T08:39:08.725811243Z" level=error msg="ContainerStatus for \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\": not found" Jul 1 08:39:08.726019 kubelet[2803]: E0701 08:39:08.725980 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\": not found" 
containerID="0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372" Jul 1 08:39:08.726134 kubelet[2803]: I0701 08:39:08.726023 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372"} err="failed to get container status \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b62731038f1a60be1618307aae046b5b5fc5ad99683a66188af6ad82da56372\": not found" Jul 1 08:39:08.726134 kubelet[2803]: I0701 08:39:08.726050 2803 scope.go:117] "RemoveContainer" containerID="d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f" Jul 1 08:39:08.726463 containerd[1551]: time="2025-07-01T08:39:08.726431256Z" level=error msg="ContainerStatus for \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\": not found" Jul 1 08:39:08.726785 kubelet[2803]: E0701 08:39:08.726765 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\": not found" containerID="d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f" Jul 1 08:39:08.726997 kubelet[2803]: I0701 08:39:08.726913 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f"} err="failed to get container status \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d85efc1545eb04d0648ad3e82233dad1a7234a52525b37a8982366bac7789a0f\": not found" Jul 1 08:39:08.726997 kubelet[2803]: I0701 08:39:08.726935 2803 scope.go:117] "RemoveContainer" containerID="269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c" Jul 1 08:39:08.729827 containerd[1551]: time="2025-07-01T08:39:08.729803941Z" level=info msg="RemoveContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\"" Jul 1 08:39:08.735422 containerd[1551]: time="2025-07-01T08:39:08.735380992Z" level=info msg="RemoveContainer for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" returns successfully" Jul 1 08:39:08.735885 kubelet[2803]: I0701 08:39:08.735817 2803 scope.go:117] "RemoveContainer" containerID="6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" Jul 1 08:39:08.738092 containerd[1551]: time="2025-07-01T08:39:08.738026203Z" level=info msg="RemoveContainer for \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\"" Jul 1 08:39:08.743363 containerd[1551]: time="2025-07-01T08:39:08.743317518Z" level=info msg="RemoveContainer for \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" returns successfully" Jul 1 08:39:08.745897 kubelet[2803]: I0701 08:39:08.745847 2803 scope.go:117] "RemoveContainer" containerID="269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c" Jul 1 08:39:08.747842 containerd[1551]: time="2025-07-01T08:39:08.747798143Z" level=error msg="ContainerStatus for \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\": not found" Jul 1 08:39:08.748350 kubelet[2803]: E0701 08:39:08.748223 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\": not found" containerID="269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c" Jul 1 08:39:08.748350 kubelet[2803]: I0701 08:39:08.748256 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c"} err="failed to get container status \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"269198c8ff4a294fd583301825a7039260edcc1b9bd3649ad6c4670c12681b0c\": not found" Jul 1 08:39:08.748350 kubelet[2803]: I0701 08:39:08.748280 2803 scope.go:117] "RemoveContainer" containerID="6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" Jul 1 08:39:08.748758 containerd[1551]: time="2025-07-01T08:39:08.748701357Z" level=error msg="ContainerStatus for \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\": not found" Jul 1 08:39:08.749236 kubelet[2803]: E0701 08:39:08.749166 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\": not found" containerID="6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad" Jul 1 08:39:08.749358 kubelet[2803]: I0701 08:39:08.749250 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad"} err="failed to get container status \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bbf637e6bd2ead3635eeec3a26686cd9be71c48d5ff16745fcc9d2ebccf84ad\": not found" Jul 1 08:39:09.145349 systemd[1]: var-lib-kubelet-pods-489c1c0b\x2d9a01\x2d4d83\x2da65f\x2d9e542bbf37ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55ll7.mount: Deactivated successfully. Jul 1 08:39:09.145722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1-shm.mount: Deactivated successfully. Jul 1 08:39:09.146822 systemd[1]: var-lib-kubelet-pods-26f99ec5\x2d703e\x2d4073\x2db4cc\x2da22c44f1ac1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8wbmj.mount: Deactivated successfully. Jul 1 08:39:09.147328 systemd[1]: var-lib-kubelet-pods-26f99ec5\x2d703e\x2d4073\x2db4cc\x2da22c44f1ac1a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 1 08:39:09.147863 systemd[1]: var-lib-kubelet-pods-26f99ec5\x2d703e\x2d4073\x2db4cc\x2da22c44f1ac1a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 1 08:39:09.475984 containerd[1551]: time="2025-07-01T08:39:09.475868994Z" level=info msg="StopPodSandbox for \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\"" Jul 1 08:39:09.479114 containerd[1551]: time="2025-07-01T08:39:09.478218971Z" level=info msg="TearDown network for sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" successfully" Jul 1 08:39:09.479245 containerd[1551]: time="2025-07-01T08:39:09.479193008Z" level=info msg="StopPodSandbox for \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" returns successfully" Jul 1 08:39:09.482235 containerd[1551]: time="2025-07-01T08:39:09.482151647Z" level=info msg="RemovePodSandbox for \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\"" Jul 1 08:39:09.482398 containerd[1551]: time="2025-07-01T08:39:09.482254670Z" level=info msg="Forcibly stopping sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\"" Jul 1 08:39:09.482600 containerd[1551]: time="2025-07-01T08:39:09.482518466Z" level=info msg="TearDown network for sandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" successfully" Jul 1 08:39:09.484868 kubelet[2803]: I0701 08:39:09.484774 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f99ec5-703e-4073-b4cc-a22c44f1ac1a" path="/var/lib/kubelet/pods/26f99ec5-703e-4073-b4cc-a22c44f1ac1a/volumes" Jul 1 08:39:09.487755 kubelet[2803]: I0701 08:39:09.487037 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489c1c0b-9a01-4d83-a65f-9e542bbf37ba" path="/var/lib/kubelet/pods/489c1c0b-9a01-4d83-a65f-9e542bbf37ba/volumes" Jul 1 08:39:09.492046 containerd[1551]: time="2025-07-01T08:39:09.491965665Z" level=info msg="Ensure that sandbox 5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1 in task-service has been cleanup successfully" Jul 1 08:39:09.498381 containerd[1551]: time="2025-07-01T08:39:09.498263929Z" level=info msg="RemovePodSandbox \"5818def94c069db0a1f18df96409a6861af16d5f857bbd47155d4d672f9773a1\" returns successfully" Jul 1 08:39:09.499463 containerd[1551]: time="2025-07-01T08:39:09.499388758Z" level=info msg="StopPodSandbox for \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\"" Jul 1 08:39:09.500422 containerd[1551]: time="2025-07-01T08:39:09.500375519Z" level=info msg="TearDown network for sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" successfully" Jul 1 08:39:09.501050 containerd[1551]: time="2025-07-01T08:39:09.500611932Z" level=info msg="StopPodSandbox for \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" returns successfully" Jul 1 08:39:09.501577 containerd[1551]: time="2025-07-01T08:39:09.501523151Z" level=info msg="RemovePodSandbox for \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\"" Jul 1 08:39:09.503252 containerd[1551]: time="2025-07-01T08:39:09.502008121Z" level=info msg="Forcibly stopping sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\"" Jul 1 08:39:09.503252 containerd[1551]: time="2025-07-01T08:39:09.502290320Z" level=info msg="TearDown network for sandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" successfully" Jul 1 08:39:09.505806 containerd[1551]: time="2025-07-01T08:39:09.505676672Z" level=info msg="Ensure that sandbox bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110 in task-service has been cleanup successfully" Jul 1 08:39:09.512099 containerd[1551]: 
time="2025-07-01T08:39:09.512017596Z" level=info msg="RemovePodSandbox \"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" returns successfully" Jul 1 08:39:09.757619 kubelet[2803]: E0701 08:39:09.757342 2803 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 1 08:39:10.049214 sshd[4708]: Connection closed by 172.24.4.1 port 52048 Jul 1 08:39:10.051789 sshd-session[4705]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:10.063828 containerd[1551]: time="2025-07-01T08:39:10.063166324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" id:\"bfb209a01d9ea82228402fbfcef03034c0d9a7e7e038811ef56c3357e11ba110\" pid:3030 exit_status:137 exited_at:{seconds:1751359148 nanos:183584800}" Jul 1 08:39:10.079618 systemd[1]: sshd@26-172.24.4.49:22-172.24.4.1:52048.service: Deactivated successfully. Jul 1 08:39:10.085764 systemd[1]: session-29.scope: Deactivated successfully. Jul 1 08:39:10.088541 systemd[1]: session-29.scope: Consumed 1.017s CPU time, 23.8M memory peak. Jul 1 08:39:10.091112 systemd-logind[1530]: Session 29 logged out. Waiting for processes to exit. Jul 1 08:39:10.100381 systemd[1]: Started sshd@27-172.24.4.49:22-172.24.4.1:52050.service - OpenSSH per-connection server daemon (172.24.4.1:52050). Jul 1 08:39:10.104267 systemd-logind[1530]: Removed session 29. Jul 1 08:39:11.388485 sshd[4863]: Accepted publickey for core from 172.24.4.1 port 52050 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:39:11.391833 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:11.413610 systemd-logind[1530]: New session 30 of user core. Jul 1 08:39:11.429550 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 1 08:39:13.012098 sshd[4868]: Connection closed by 172.24.4.1 port 52050 Jul 1 08:39:13.013453 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:13.030042 systemd[1]: sshd@27-172.24.4.49:22-172.24.4.1:52050.service: Deactivated successfully. Jul 1 08:39:13.033686 systemd[1]: session-30.scope: Deactivated successfully. Jul 1 08:39:13.038199 systemd-logind[1530]: Session 30 logged out. Waiting for processes to exit. Jul 1 08:39:13.046620 systemd[1]: Started sshd@28-172.24.4.49:22-172.24.4.1:52056.service - OpenSSH per-connection server daemon (172.24.4.1:52056). Jul 1 08:39:13.053521 systemd-logind[1530]: Removed session 30. 
Jul 1 08:39:13.078823 kubelet[2803]: I0701 08:39:13.078739 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-host-proc-sys-net\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.079402 kubelet[2803]: I0701 08:39:13.078851 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-cilium-run\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.081147 kubelet[2803]: I0701 08:39:13.081109 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-xtables-lock\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081431 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/96f3ee62-1a5c-478b-9a1f-6567611ce07f-cilium-ipsec-secrets\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081514 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96f3ee62-1a5c-478b-9a1f-6567611ce07f-hubble-tls\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081569 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-bpf-maps\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081648 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-cni-path\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081759 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-lib-modules\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082095 kubelet[2803]: I0701 08:39:13.081974 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96f3ee62-1a5c-478b-9a1f-6567611ce07f-cilium-config-path\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082422 kubelet[2803]: I0701 08:39:13.082000 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-etc-cni-netd\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082422 kubelet[2803]: I0701 08:39:13.082089 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnz9\" (UniqueName: \"kubernetes.io/projected/96f3ee62-1a5c-478b-9a1f-6567611ce07f-kube-api-access-xsnz9\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082422 kubelet[2803]: I0701 08:39:13.082139 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96f3ee62-1a5c-478b-9a1f-6567611ce07f-clustermesh-secrets\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082422 kubelet[2803]: I0701 08:39:13.082172 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-host-proc-sys-kernel\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082623 kubelet[2803]: I0701 08:39:13.082486 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-hostproc\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.082623 kubelet[2803]: I0701 08:39:13.082523 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96f3ee62-1a5c-478b-9a1f-6567611ce07f-cilium-cgroup\") pod \"cilium-52zbq\" (UID: \"96f3ee62-1a5c-478b-9a1f-6567611ce07f\") " pod="kube-system/cilium-52zbq" Jul 1 08:39:13.099933 systemd[1]: Created slice kubepods-burstable-pod96f3ee62_1a5c_478b_9a1f_6567611ce07f.slice - libcontainer container kubepods-burstable-pod96f3ee62_1a5c_478b_9a1f_6567611ce07f.slice. Jul 1 08:39:13.418292 containerd[1551]: time="2025-07-01T08:39:13.417678901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52zbq,Uid:96f3ee62-1a5c-478b-9a1f-6567611ce07f,Namespace:kube-system,Attempt:0,}" Jul 1 08:39:13.508571 containerd[1551]: time="2025-07-01T08:39:13.508442180Z" level=info msg="connecting to shim 53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:39:13.555421 systemd[1]: Started cri-containerd-53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e.scope - libcontainer container 53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e. 
Jul 1 08:39:13.622265 containerd[1551]: time="2025-07-01T08:39:13.622146138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52zbq,Uid:96f3ee62-1a5c-478b-9a1f-6567611ce07f,Namespace:kube-system,Attempt:0,} returns sandbox id \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\"" Jul 1 08:39:13.637802 containerd[1551]: time="2025-07-01T08:39:13.637722473Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 1 08:39:13.655296 containerd[1551]: time="2025-07-01T08:39:13.655131877Z" level=info msg="Container 07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:13.669811 containerd[1551]: time="2025-07-01T08:39:13.669464500Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\"" Jul 1 08:39:13.671550 containerd[1551]: time="2025-07-01T08:39:13.671452288Z" level=info msg="StartContainer for \"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\"" Jul 1 08:39:13.677131 containerd[1551]: time="2025-07-01T08:39:13.677083982Z" level=info msg="connecting to shim 07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" protocol=ttrpc version=3 Jul 1 08:39:13.707253 systemd[1]: Started cri-containerd-07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323.scope - libcontainer container 07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323. Jul 1 08:39:13.755731 containerd[1551]: time="2025-07-01T08:39:13.755655534Z" level=info msg="StartContainer for \"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\" returns successfully" Jul 1 08:39:13.784604 systemd[1]: cri-containerd-07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323.scope: Deactivated successfully. Jul 1 08:39:13.791793 containerd[1551]: time="2025-07-01T08:39:13.791484085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\" id:\"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\" pid:4942 exited_at:{seconds:1751359153 nanos:788972424}" Jul 1 08:39:13.791793 containerd[1551]: time="2025-07-01T08:39:13.791620932Z" level=info msg="received exit event container_id:\"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\" id:\"07cf39eca36df6112822d172b4c5a277e24d9e9512af89ebc6dca2dc5b435323\" pid:4942 exited_at:{seconds:1751359153 nanos:788972424}" Jul 1 08:39:14.647876 sshd[4878]: Accepted publickey for core from 172.24.4.1 port 52056 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:39:14.666489 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:14.699559 systemd-logind[1530]: New session 31 of user core. Jul 1 08:39:14.704509 systemd[1]: Started session-31.scope - Session 31 of User core. 
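[Editor's note] The "received exit event" entries above carry the exit time as a protobuf-style {seconds, nanos} pair (seconds:1751359153 nanos:788972424 for the mount-cgroup init container). Converting that pair back to a wall-clock timestamp is a one-liner; a quick sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the mount-cgroup init container's exit event above.
	exitedAt := time.Unix(1751359153, 788972424).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-07-01T08:39:13.788972424Z
}
```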
Jul 1 08:39:14.746977 containerd[1551]: time="2025-07-01T08:39:14.746930650Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 1 08:39:14.760484 kubelet[2803]: E0701 08:39:14.760354 2803 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 1 08:39:14.777618 containerd[1551]: time="2025-07-01T08:39:14.777491402Z" level=info msg="Container 53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:14.778452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501575025.mount: Deactivated successfully. Jul 1 08:39:14.785029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718795319.mount: Deactivated successfully. Jul 1 08:39:14.800443 containerd[1551]: time="2025-07-01T08:39:14.800363261Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\"" Jul 1 08:39:14.802082 containerd[1551]: time="2025-07-01T08:39:14.801614478Z" level=info msg="StartContainer for \"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\"" Jul 1 08:39:14.803823 containerd[1551]: time="2025-07-01T08:39:14.803765562Z" level=info msg="connecting to shim 53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" protocol=ttrpc version=3 Jul 1 08:39:14.847609 systemd[1]: Started cri-containerd-53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c.scope - libcontainer container 53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c. Jul 1 08:39:14.915284 containerd[1551]: time="2025-07-01T08:39:14.914954613Z" level=info msg="StartContainer for \"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\" returns successfully" Jul 1 08:39:14.924505 systemd[1]: cri-containerd-53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c.scope: Deactivated successfully. Jul 1 08:39:14.930253 containerd[1551]: time="2025-07-01T08:39:14.929737861Z" level=info msg="received exit event container_id:\"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\" id:\"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\" pid:4987 exited_at:{seconds:1751359154 nanos:928930587}" Jul 1 08:39:14.930790 containerd[1551]: time="2025-07-01T08:39:14.930761732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\" id:\"53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c\" pid:4987 exited_at:{seconds:1751359154 nanos:928930587}" Jul 1 08:39:15.203942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53fec9bcc6e44b898a21eeecceb1b342211299ed00eb4d1e4fbbfde71b00055c-rootfs.mount: Deactivated successfully. Jul 1 08:39:15.239936 sshd[4974]: Connection closed by 172.24.4.1 port 52056 Jul 1 08:39:15.242255 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:15.263636 systemd[1]: sshd@28-172.24.4.49:22-172.24.4.1:52056.service: Deactivated successfully. 
Jul 1 08:39:15.275772 systemd[1]: session-31.scope: Deactivated successfully. Jul 1 08:39:15.280020 systemd-logind[1530]: Session 31 logged out. Waiting for processes to exit. Jul 1 08:39:15.292118 systemd[1]: Started sshd@29-172.24.4.49:22-172.24.4.1:47498.service - OpenSSH per-connection server daemon (172.24.4.1:47498). Jul 1 08:39:15.298597 systemd-logind[1530]: Removed session 31. Jul 1 08:39:15.721692 containerd[1551]: time="2025-07-01T08:39:15.721526183Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 1 08:39:15.766087 containerd[1551]: time="2025-07-01T08:39:15.764318105Z" level=info msg="Container 4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:15.788086 containerd[1551]: time="2025-07-01T08:39:15.787941465Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\"" Jul 1 08:39:15.791251 containerd[1551]: time="2025-07-01T08:39:15.790705068Z" level=info msg="StartContainer for \"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\"" Jul 1 08:39:15.795141 containerd[1551]: time="2025-07-01T08:39:15.795100012Z" level=info msg="connecting to shim 4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" protocol=ttrpc version=3 Jul 1 08:39:15.842321 systemd[1]: Started cri-containerd-4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17.scope - libcontainer container 4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17. Jul 1 08:39:15.913135 containerd[1551]: time="2025-07-01T08:39:15.912233642Z" level=info msg="received exit event container_id:\"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\" id:\"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\" pid:5043 exited_at:{seconds:1751359155 nanos:912002548}" Jul 1 08:39:15.912287 systemd[1]: cri-containerd-4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17.scope: Deactivated successfully. Jul 1 08:39:15.914805 containerd[1551]: time="2025-07-01T08:39:15.914654563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\" id:\"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\" pid:5043 exited_at:{seconds:1751359155 nanos:912002548}" Jul 1 08:39:15.918091 containerd[1551]: time="2025-07-01T08:39:15.915200928Z" level=info msg="StartContainer for \"4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17\" returns successfully" Jul 1 08:39:15.948183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b7e43f60c318dc4dbc5b8b3900598e982f6045fdecb153e349ac90ac61e6c17-rootfs.mount: Deactivated successfully. Jul 1 08:39:16.316180 sshd[5025]: Accepted publickey for core from 172.24.4.1 port 47498 ssh2: RSA SHA256:OtMLMno53upG6UEgyPzObkH0Sb4RVcHs4vv+qZKdqJo Jul 1 08:39:16.319226 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:16.337242 systemd-logind[1530]: New session 32 of user core. Jul 1 08:39:16.346485 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jul 1 08:39:16.733186 containerd[1551]: time="2025-07-01T08:39:16.732941426Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 1 08:39:16.766125 containerd[1551]: time="2025-07-01T08:39:16.764482626Z" level=info msg="Container b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:16.772528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142616327.mount: Deactivated successfully. Jul 1 08:39:16.786108 containerd[1551]: time="2025-07-01T08:39:16.786042354Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\"" Jul 1 08:39:16.787300 containerd[1551]: time="2025-07-01T08:39:16.787276319Z" level=info msg="StartContainer for \"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\"" Jul 1 08:39:16.788804 containerd[1551]: time="2025-07-01T08:39:16.788776673Z" level=info msg="connecting to shim b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" protocol=ttrpc version=3 Jul 1 08:39:16.845289 systemd[1]: Started cri-containerd-b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642.scope - libcontainer container b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642. Jul 1 08:39:16.938254 systemd[1]: cri-containerd-b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642.scope: Deactivated successfully. Jul 1 08:39:16.944374 containerd[1551]: time="2025-07-01T08:39:16.944335164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\" id:\"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\" pid:5087 exited_at:{seconds:1751359156 nanos:943837301}" Jul 1 08:39:16.946590 containerd[1551]: time="2025-07-01T08:39:16.946459239Z" level=info msg="received exit event container_id:\"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\" id:\"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\" pid:5087 exited_at:{seconds:1751359156 nanos:943837301}" Jul 1 08:39:16.987087 containerd[1551]: time="2025-07-01T08:39:16.986437605Z" level=info msg="StartContainer for \"b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642\" returns successfully" Jul 1 08:39:17.012124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b40806cb4fcf8726df8d156b432d2936b72b47d59a6114322e6ed240a0fab642-rootfs.mount: Deactivated successfully. 
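[Editor's note] The mount units deactivated above (for example var-lib-containerd-tmpmounts-containerd\x2dmount1142616327.mount, and the var-lib-kubelet-pods units earlier) use systemd's unit-name escaping: "-" stands in for "/", and bytes that would be ambiguous are written as \xNN (a literal dash becomes \x2d, "~" becomes \x7e). A sketch of the reverse mapping, assuming the unit encodes an absolute path; systemd-escape(1) is the authoritative implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's unit-name escaping for a .mount unit:
// "-" maps back to "/", and "\xNN" decodes to the byte 0xNN. Sketch only.
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // the loop's i++ completes the 4-character "\xNN" skip
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String() // mount units here encode absolute paths
}

func main() {
	fmt.Println(unescapeUnitPath(`var-lib-containerd-tmpmounts-containerd\x2dmount1142616327.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount1142616327
}
```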
Jul 1 08:39:17.755636 containerd[1551]: time="2025-07-01T08:39:17.754504232Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 1 08:39:17.793130 containerd[1551]: time="2025-07-01T08:39:17.791753999Z" level=info msg="Container 684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:17.832372 containerd[1551]: time="2025-07-01T08:39:17.832256447Z" level=info msg="CreateContainer within sandbox \"53807ca7d21121c4dd2537cb1c593ecb866995041f07d748ebeab3e5cefeba6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\"" Jul 1 08:39:17.835838 containerd[1551]: time="2025-07-01T08:39:17.834330267Z" level=info msg="StartContainer for \"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\"" Jul 1 08:39:17.839202 containerd[1551]: time="2025-07-01T08:39:17.839156039Z" level=info msg="connecting to shim 684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e" address="unix:///run/containerd/s/083c93817c12d66946d7a27a5728d424c1007facc8fb1f5eb674c2b36b28f8ec" protocol=ttrpc version=3 Jul 1 08:39:17.888277 systemd[1]: Started cri-containerd-684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e.scope - libcontainer container 684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e. Jul 1 08:39:17.950222 containerd[1551]: time="2025-07-01T08:39:17.949236190Z" level=info msg="StartContainer for \"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" returns successfully" Jul 1 08:39:18.083281 containerd[1551]: time="2025-07-01T08:39:18.083165163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"146cd0ee3a138d7c241389adde045812bf34953b125f1b91ff18dbf5364bb7dc\" pid:5156 exited_at:{seconds:1751359158 nanos:82813924}" Jul 1 08:39:18.475002 kubelet[2803]: E0701 08:39:18.474624 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-sb9xv" podUID="79547f0c-31b5-4b7b-b62a-0df562a1e3a5" Jul 1 08:39:18.598732 kubelet[2803]: I0701 08:39:18.598627 2803 setters.go:618] "Node became not ready" node="ci-9999-9-9-s-39d8ad6622.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-01T08:39:18Z","lastTransitionTime":"2025-07-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 1 08:39:18.685213 kernel: cryptd: max_cpu_qlen set to 1000 Jul 1 08:39:18.771148 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jul 1 08:39:18.847877 kubelet[2803]: I0701 08:39:18.847728 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-52zbq" podStartSLOduration=6.847681742 podStartE2EDuration="6.847681742s" podCreationTimestamp="2025-07-01 08:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:39:18.832963645 +0000 UTC 
m=+249.665619607" watchObservedRunningTime="2025-07-01 08:39:18.847681742 +0000 UTC m=+249.680337714" Jul 1 08:39:19.354833 containerd[1551]: time="2025-07-01T08:39:19.354717690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"9d357f09fe73f11acba1f8727d411d3eab695dc56f4eec457b7a7e81b36055cf\" pid:5256 exit_status:1 exited_at:{seconds:1751359159 nanos:354047593}" Jul 1 08:39:21.538814 containerd[1551]: time="2025-07-01T08:39:21.538756413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"706359b247dba61610c5f41e0ecab3baddd0f58521d34df7e8d427ccb2d94653\" pid:5409 exit_status:1 exited_at:{seconds:1751359161 nanos:538330885}" Jul 1 08:39:22.671793 systemd-networkd[1438]: lxc_health: Link UP Jul 1 08:39:22.675783 systemd-networkd[1438]: lxc_health: Gained carrier Jul 1 08:39:23.761072 containerd[1551]: time="2025-07-01T08:39:23.760964780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"cba298d45f4ac72b4e084438de49e6691d55135df0a868f0d08c50ef48bbb075\" pid:5724 exited_at:{seconds:1751359163 nanos:760646513}" Jul 1 08:39:24.018456 systemd-networkd[1438]: lxc_health: Gained IPv6LL Jul 1 08:39:26.047579 containerd[1551]: time="2025-07-01T08:39:26.047513926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"6a933ec5a0087687616bce4d7ccdb30d629a050af38b9055fe5b8948979e38b9\" pid:5748 exited_at:{seconds:1751359166 nanos:47146897}" Jul 1 08:39:28.280380 containerd[1551]: time="2025-07-01T08:39:28.280330870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"684804d1b03534657d046d350570fd878fa1d51d6f99869608133e95cb8c0d1e\" id:\"bfc7281d22ac75a1a6ed17d7167014ca8a69bbe3b8cbc150629df09ea3c9aedb\" pid:5784 exited_at:{seconds:1751359168 nanos:279607874}" Jul 1 08:39:28.601949 sshd[5071]: Connection closed by 172.24.4.1 port 47498 Jul 1 08:39:28.603676 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:28.611386 systemd-logind[1530]: Session 32 logged out. Waiting for processes to exit. Jul 1 08:39:28.614387 systemd[1]: sshd@29-172.24.4.49:22-172.24.4.1:47498.service: Deactivated successfully. Jul 1 08:39:28.618800 systemd[1]: session-32.scope: Deactivated successfully. Jul 1 08:39:28.624431 systemd-logind[1530]: Removed session 32.