Jul 10 00:21:05.879355 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025 Jul 10 00:21:05.879385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:21:05.879396 kernel: BIOS-provided physical RAM map: Jul 10 00:21:05.879404 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 10 00:21:05.879411 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 10 00:21:05.879418 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 10 00:21:05.879427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jul 10 00:21:05.879440 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jul 10 00:21:05.879451 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:21:05.879459 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 10 00:21:05.879466 kernel: NX (Execute Disable) protection: active Jul 10 00:21:05.879474 kernel: APIC: Static calls initialized Jul 10 00:21:05.879481 kernel: SMBIOS 2.8 present. Jul 10 00:21:05.879489 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jul 10 00:21:05.879502 kernel: DMI: Memory slots populated: 1/1 Jul 10 00:21:05.879510 kernel: Hypervisor detected: KVM Jul 10 00:21:05.879522 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 00:21:05.879531 kernel: kvm-clock: using sched offset of 4817234829 cycles Jul 10 00:21:05.879540 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 00:21:05.879549 kernel: tsc: Detected 2494.136 MHz processor Jul 10 00:21:05.879558 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:21:05.879567 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:21:05.879576 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jul 10 00:21:05.879588 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 10 00:21:05.879597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:21:05.879605 kernel: ACPI: Early table checksum verification disabled Jul 10 00:21:05.879614 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jul 10 00:21:05.879623 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879631 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879655 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879664 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 10 00:21:05.879673 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879686 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879695 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:21:05.879703 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Jul 10 00:21:05.879712 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jul 10 00:21:05.879720 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jul 10 00:21:05.879729 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jul 10 00:21:05.879737 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jul 10 00:21:05.879746 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jul 10 00:21:05.879762 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jul 10 00:21:05.879771 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jul 10 00:21:05.879780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 10 00:21:05.879789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 10 00:21:05.879798 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jul 10 00:21:05.879828 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jul 10 00:21:05.879842 kernel: Zone ranges: Jul 10 00:21:05.879855 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:21:05.879869 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jul 10 00:21:05.879883 kernel: Normal empty Jul 10 00:21:05.879896 kernel: Device empty Jul 10 00:21:05.879905 kernel: Movable zone start for each node Jul 10 00:21:05.879914 kernel: Early memory node ranges Jul 10 00:21:05.879923 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 10 00:21:05.879932 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jul 10 00:21:05.879948 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jul 10 00:21:05.879962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:21:05.879974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 10 00:21:05.879989 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jul 10 00:21:05.880007 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 00:21:05.880025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 00:21:05.880049 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:21:05.880157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 00:21:05.880214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 00:21:05.880244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:21:05.880269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 00:21:05.880287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 00:21:05.880306 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:21:05.880325 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 00:21:05.880344 kernel: TSC deadline timer available Jul 10 00:21:05.880363 kernel: CPU topo: Max. logical packages: 1 Jul 10 00:21:05.880382 kernel: CPU topo: Max. logical dies: 1 Jul 10 00:21:05.880453 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:21:05.880471 kernel: CPU topo: Max. threads per core: 1 Jul 10 00:21:05.880497 kernel: CPU topo: Num. cores per package: 2 Jul 10 00:21:05.880514 kernel: CPU topo: Num. 
threads per package: 2 Jul 10 00:21:05.880532 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 10 00:21:05.880551 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 00:21:05.880569 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 10 00:21:05.880587 kernel: Booting paravirtualized kernel on KVM Jul 10 00:21:05.880607 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:21:05.880627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 10 00:21:05.880668 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 10 00:21:05.880692 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 10 00:21:05.880705 kernel: pcpu-alloc: [0] 0 1 Jul 10 00:21:05.880719 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 10 00:21:05.880736 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:21:05.880750 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:21:05.880764 kernel: random: crng init done Jul 10 00:21:05.880777 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:21:05.880792 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 10 00:21:05.880813 kernel: Fallback order for Node 0: 0 Jul 10 00:21:05.880823 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jul 10 00:21:05.880832 kernel: Policy zone: DMA32 Jul 10 00:21:05.880841 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:21:05.880850 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 10 00:21:05.880859 kernel: Kernel/User page tables isolation: enabled Jul 10 00:21:05.880868 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:21:05.880877 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:21:05.880886 kernel: Dynamic Preempt: voluntary Jul 10 00:21:05.880899 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:21:05.880909 kernel: rcu: RCU event tracing is enabled. Jul 10 00:21:05.880918 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 10 00:21:05.880927 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:21:05.880936 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:21:05.880945 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:21:05.880954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:21:05.880963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 10 00:21:05.880972 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 00:21:05.880990 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 10 00:21:05.881000 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 10 00:21:05.881009 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 10 00:21:05.881021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 00:21:05.881035 kernel: Console: colour VGA+ 80x25 Jul 10 00:21:05.881049 kernel: printk: legacy console [tty0] enabled Jul 10 00:21:05.881059 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:21:05.881068 kernel: ACPI: Core revision 20240827 Jul 10 00:21:05.881078 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 00:21:05.881099 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:21:05.881109 kernel: x2apic enabled Jul 10 00:21:05.881119 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:21:05.881131 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:21:05.881144 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Jul 10 00:21:05.881171 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136) Jul 10 00:21:05.881180 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 10 00:21:05.881190 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 10 00:21:05.881200 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:21:05.881213 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:21:05.881223 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:21:05.881232 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 10 00:21:05.881242 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:21:05.881251 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 00:21:05.881261 kernel: MDS: Mitigation: Clear CPU buffers Jul 10 00:21:05.881271 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 10 00:21:05.881283 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 10 00:21:05.881293 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:21:05.881303 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:21:05.881312 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:21:05.881322 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:21:05.881332 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 10 00:21:05.881341 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:21:05.881350 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:21:05.881360 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:21:05.881373 kernel: landlock: Up and running. Jul 10 00:21:05.881382 kernel: SELinux: Initializing. Jul 10 00:21:05.881392 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:21:05.881401 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:21:05.881411 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jul 10 00:21:05.881420 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jul 10 00:21:05.881430 kernel: signal: max sigframe size: 1776 Jul 10 00:21:05.881439 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:21:05.881449 kernel: rcu: Max phase no-delay instances is 400. 
Jul 10 00:21:05.881462 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 00:21:05.881471 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 10 00:21:05.881481 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:21:05.881490 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:21:05.881516 kernel: .... node #0, CPUs: #1 Jul 10 00:21:05.881527 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 00:21:05.881537 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS) Jul 10 00:21:05.881547 kernel: Memory: 1966904K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 125144K reserved, 0K cma-reserved) Jul 10 00:21:05.881556 kernel: devtmpfs: initialized Jul 10 00:21:05.881571 kernel: x86/mm: Memory block size: 128MB Jul 10 00:21:05.881587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:21:05.881602 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 10 00:21:05.881614 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:21:05.881628 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:21:05.881655 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:21:05.881669 kernel: audit: type=2000 audit(1752106862.594:1): state=initialized audit_enabled=0 res=1 Jul 10 00:21:05.881684 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:21:05.881697 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:21:05.881720 kernel: cpuidle: using governor menu Jul 10 00:21:05.881735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:21:05.881748 kernel: dca service started, version 1.12.1 Jul 10 00:21:05.881757 kernel: PCI: Using configuration type 1 for base access Jul 10 00:21:05.881767 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
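The BogoMIPS figures in the entries above are self-consistent: the per-CPU value comes from the reported lpj (loops per jiffy), and the "Total of 2 processors activated" value is simply twice that. A minimal Python sanity check, assuming the kernel's usual reporting formula bogomips = lpj / (500000 / HZ) and an assumed CONFIG_HZ of 1000 for this build:

    # Cross-check of the BogoMIPS values printed during boot (illustrative only).
    lpj = 2494136                    # from "Calibrating delay loop (skipped) ... (lpj=2494136)"
    hz = 1000                        # assumed CONFIG_HZ; not stated explicitly in the log
    per_cpu = lpj / (500000 / hz)
    print(round(per_cpu, 2))         # 4988.27 -> matches the per-CPU calibration line
    print(round(2 * per_cpu, 2))     # 9976.54 -> matches "Total of 2 processors activated"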
Jul 10 00:21:05.881777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:21:05.881786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:21:05.881796 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:21:05.881805 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:21:05.881821 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:21:05.881839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:21:05.881852 kernel: ACPI: Interpreter enabled Jul 10 00:21:05.881866 kernel: ACPI: PM: (supports S0 S5) Jul 10 00:21:05.881879 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:21:05.881894 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:21:05.881909 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 00:21:05.881919 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 10 00:21:05.881929 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:21:05.882273 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:21:05.882395 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 10 00:21:05.882544 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 10 00:21:05.882562 kernel: acpiphp: Slot [3] registered Jul 10 00:21:05.882576 kernel: acpiphp: Slot [4] registered Jul 10 00:21:05.882589 kernel: acpiphp: Slot [5] registered Jul 10 00:21:05.882603 kernel: acpiphp: Slot [6] registered Jul 10 00:21:05.882626 kernel: acpiphp: Slot [7] registered Jul 10 00:21:05.882649 kernel: acpiphp: Slot [8] registered Jul 10 00:21:05.882660 kernel: acpiphp: Slot [9] registered Jul 10 00:21:05.882670 kernel: acpiphp: Slot [10] registered Jul 10 00:21:05.882679 kernel: acpiphp: Slot [11] registered Jul 10 00:21:05.882689 kernel: acpiphp: Slot [12] registered Jul 10 00:21:05.882698 kernel: acpiphp: Slot [13] registered Jul 10 00:21:05.882708 kernel: acpiphp: Slot [14] registered Jul 10 00:21:05.882717 kernel: acpiphp: Slot [15] registered Jul 10 00:21:05.882727 kernel: acpiphp: Slot [16] registered Jul 10 00:21:05.882741 kernel: acpiphp: Slot [17] registered Jul 10 00:21:05.882751 kernel: acpiphp: Slot [18] registered Jul 10 00:21:05.882761 kernel: acpiphp: Slot [19] registered Jul 10 00:21:05.882770 kernel: acpiphp: Slot [20] registered Jul 10 00:21:05.882780 kernel: acpiphp: Slot [21] registered Jul 10 00:21:05.882789 kernel: acpiphp: Slot [22] registered Jul 10 00:21:05.882799 kernel: acpiphp: Slot [23] registered Jul 10 00:21:05.882808 kernel: acpiphp: Slot [24] registered Jul 10 00:21:05.882818 kernel: acpiphp: Slot [25] registered Jul 10 00:21:05.882831 kernel: acpiphp: Slot [26] registered Jul 10 00:21:05.882841 kernel: acpiphp: Slot [27] registered Jul 10 00:21:05.882850 kernel: acpiphp: Slot [28] registered Jul 10 00:21:05.882860 kernel: acpiphp: Slot [29] registered Jul 10 00:21:05.882869 kernel: acpiphp: Slot [30] registered Jul 10 00:21:05.882879 kernel: acpiphp: Slot [31] registered Jul 10 00:21:05.882889 kernel: PCI host bridge to bus 0000:00 Jul 10 00:21:05.883092 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:21:05.883224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 00:21:05.883340 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:21:05.883439 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jul 10 00:21:05.883535 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 10 00:21:05.883634 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:21:05.883912 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 10 00:21:05.884067 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 10 00:21:05.884197 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jul 10 00:21:05.884317 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jul 10 00:21:05.884423 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 10 00:21:05.884528 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 10 00:21:05.884633 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 10 00:21:05.884779 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 10 00:21:05.884954 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jul 10 00:21:05.885081 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jul 10 00:21:05.885199 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 10 00:21:05.885305 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 10 00:21:05.885411 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 10 00:21:05.885538 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jul 10 00:21:05.885658 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jul 10 00:21:05.885772 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jul 10 00:21:05.885898 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jul 10 00:21:05.886002 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jul 10 00:21:05.886107 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:21:05.886231 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 00:21:05.886336 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jul 10 00:21:05.886486 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jul 10 00:21:05.886613 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jul 10 00:21:05.886746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 00:21:05.886864 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jul 10 00:21:05.886969 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jul 10 00:21:05.887073 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 10 00:21:05.887203 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:21:05.887315 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jul 10 00:21:05.887447 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jul 10 00:21:05.887552 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 10 00:21:05.887714 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:21:05.887841 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jul 10 00:21:05.887964 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jul 10 00:21:05.888067 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jul 10 00:21:05.888187 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:21:05.888300 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jul 10 00:21:05.888402 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jul 10 00:21:05.888532 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jul 10 00:21:05.888692 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 00:21:05.888799 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jul 10 00:21:05.888948 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jul 10 00:21:05.888967 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 00:21:05.888982 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 00:21:05.888996 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:21:05.889012 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 00:21:05.889027 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 10 00:21:05.889037 kernel: iommu: Default domain type: Translated Jul 10 00:21:05.889047 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:21:05.889057 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:21:05.889074 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:21:05.889084 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 10 00:21:05.889094 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jul 10 00:21:05.889239 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 10 00:21:05.889353 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 10 00:21:05.889460 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:21:05.889473 kernel: vgaarb: loaded Jul 10 00:21:05.889483 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 00:21:05.889493 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 00:21:05.889514 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 00:21:05.889528 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:21:05.889542 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:21:05.889555 kernel: pnp: PnP ACPI init Jul 10 00:21:05.889568 kernel: pnp: PnP ACPI: found 4 devices Jul 10 00:21:05.889583 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:21:05.889597 kernel: NET: Registered PF_INET protocol family Jul 10 00:21:05.889612 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:21:05.889686 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 10 00:21:05.889708 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:21:05.889721 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 00:21:05.889734 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 10 00:21:05.889748 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 10 00:21:05.889762 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:21:05.889776 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:21:05.889790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:21:05.889804 kernel: NET: Registered PF_XDP protocol family Jul 10 00:21:05.890002 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 00:21:05.890137 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 00:21:05.890262 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 00:21:05.890369 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 10 00:21:05.890500 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 10 00:21:05.890623 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 10 00:21:05.890768 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 10 00:21:05.890783 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 10 00:21:05.890927 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27311 usecs Jul 10 00:21:05.890942 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:21:05.890952 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 10 00:21:05.890963 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Jul 10 00:21:05.890973 kernel: Initialise system trusted keyrings Jul 10 00:21:05.890983 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 10 00:21:05.890994 kernel: Key type asymmetric registered Jul 10 00:21:05.891003 kernel: Asymmetric key parser 'x509' registered Jul 10 00:21:05.891014 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 00:21:05.891029 kernel: io scheduler mq-deadline registered Jul 10 00:21:05.891040 kernel: io scheduler kyber registered Jul 10 00:21:05.891050 kernel: io scheduler bfq registered Jul 10 00:21:05.891060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:21:05.891070 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 10 00:21:05.891080 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 10 00:21:05.891090 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 10 00:21:05.891100 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:21:05.891110 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:21:05.891124 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 00:21:05.891134 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:21:05.891143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:21:05.891306 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 10 00:21:05.891323 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:21:05.891449 kernel: rtc_cmos 00:03: registered as rtc0 Jul 10 00:21:05.891554 kernel: rtc_cmos 00:03: setting system clock to 2025-07-10T00:21:05 UTC (1752106865) Jul 10 00:21:05.891696 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 10 00:21:05.891716 kernel: intel_pstate: CPU model not supported Jul 10 00:21:05.891726 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:21:05.891736 kernel: Segment Routing with IPv6 Jul 10 00:21:05.891745 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:21:05.891755 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:21:05.891765 kernel: Key type dns_resolver registered Jul 10 00:21:05.891775 kernel: IPI shorthand broadcast: enabled Jul 10 00:21:05.891785 kernel: sched_clock: Marking stable (3245002469, 90957680)->(3354900915, -18940766) Jul 10 00:21:05.891795 kernel: registered taskstats version 1 Jul 10 00:21:05.891808 kernel: Loading compiled-in X.509 certificates Jul 10 00:21:05.891845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 
00:21:05.891859 kernel: Demotion targets for Node 0: null Jul 10 00:21:05.891874 kernel: Key type .fscrypt registered Jul 10 00:21:05.891884 kernel: Key type fscrypt-provisioning registered Jul 10 00:21:05.891899 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:21:05.891928 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:21:05.891942 kernel: ima: No architecture policies found Jul 10 00:21:05.891952 kernel: clk: Disabling unused clocks Jul 10 00:21:05.891965 kernel: Warning: unable to open an initial console. Jul 10 00:21:05.891976 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:21:05.891986 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:21:05.891996 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:21:05.892006 kernel: Run /init as init process Jul 10 00:21:05.892017 kernel: with arguments: Jul 10 00:21:05.892027 kernel: /init Jul 10 00:21:05.892036 kernel: with environment: Jul 10 00:21:05.892046 kernel: HOME=/ Jul 10 00:21:05.892059 kernel: TERM=linux Jul 10 00:21:05.892069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:21:05.892081 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:21:05.892095 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:21:05.892106 systemd[1]: Detected virtualization kvm. Jul 10 00:21:05.892116 systemd[1]: Detected architecture x86-64. Jul 10 00:21:05.892127 systemd[1]: Running in initrd. Jul 10 00:21:05.892141 systemd[1]: No hostname configured, using default hostname. Jul 10 00:21:05.892156 systemd[1]: Hostname set to . Jul 10 00:21:05.892167 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:21:05.892177 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:21:05.892188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:21:05.892199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:05.892211 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:21:05.892222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:21:05.892237 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:21:05.892249 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:21:05.892261 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:21:05.892275 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:21:05.892290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:05.892301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:05.892311 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:21:05.892322 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:21:05.892333 systemd[1]: Reached target swap.target - Swaps. 
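The rtc_cmos entry above records the boot-time clock both as a human-readable time and as a raw epoch value (2025-07-10T00:21:05 UTC, 1752106865). A small Python snippet confirming the two agree, which is handy when correlating these kernel timestamps with other logs:

    from datetime import datetime, timezone

    # Epoch value from "rtc_cmos 00:03: setting system clock to ... (1752106865)".
    boot_epoch = 1752106865
    print(datetime.fromtimestamp(boot_epoch, tz=timezone.utc).isoformat())
    # -> 2025-07-10T00:21:05+00:00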
Jul 10 00:21:05.892343 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:21:05.892357 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:21:05.892373 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:21:05.892387 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:21:05.892415 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:21:05.892431 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:05.892452 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:05.892467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:05.892477 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:21:05.892494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:21:05.892512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:21:05.892522 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:21:05.892538 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:21:05.892549 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:21:05.892560 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:21:05.892570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:21:05.892582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:05.892592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:21:05.892699 systemd-journald[210]: Collecting audit messages is disabled. Jul 10 00:21:05.892729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:05.892746 systemd-journald[210]: Journal started Jul 10 00:21:05.892777 systemd-journald[210]: Runtime Journal (/run/log/journal/84682df54ec4499fa7379237a0b203c2) is 4.9M, max 39.5M, 34.6M free. Jul 10 00:21:05.897686 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:21:05.899930 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:21:05.903786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:21:05.910295 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:21:05.916063 systemd-modules-load[212]: Inserted module 'overlay' Jul 10 00:21:05.934856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:21:05.936493 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:21:05.948290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:21:05.954272 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:05.987781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 10 00:21:05.987880 kernel: Bridge firewalling registered Jul 10 00:21:05.964691 systemd-modules-load[212]: Inserted module 'br_netfilter' Jul 10 00:21:05.988007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:05.988756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:05.989320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:05.993301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:21:05.994808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:21:06.020775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:06.026710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:21:06.028255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:21:06.031833 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:21:06.063450 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:21:06.081755 systemd-resolved[249]: Positive Trust Anchors: Jul 10 00:21:06.082343 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:21:06.082384 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:21:06.087434 systemd-resolved[249]: Defaulting to hostname 'linux'. Jul 10 00:21:06.089730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:21:06.090131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:06.182694 kernel: SCSI subsystem initialized Jul 10 00:21:06.194689 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:21:06.207701 kernel: iscsi: registered transport (tcp) Jul 10 00:21:06.233826 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:21:06.233908 kernel: QLogic iSCSI HBA Driver Jul 10 00:21:06.260940 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:21:06.282368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:06.283273 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:21:06.356288 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:21:06.359363 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 10 00:21:06.425688 kernel: raid6: avx2x4 gen() 14717 MB/s Jul 10 00:21:06.442690 kernel: raid6: avx2x2 gen() 14582 MB/s Jul 10 00:21:06.460133 kernel: raid6: avx2x1 gen() 11102 MB/s Jul 10 00:21:06.460216 kernel: raid6: using algorithm avx2x4 gen() 14717 MB/s Jul 10 00:21:06.477889 kernel: raid6: .... xor() 6054 MB/s, rmw enabled Jul 10 00:21:06.478001 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:21:06.500679 kernel: xor: automatically using best checksumming function avx Jul 10 00:21:06.695708 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:21:06.705672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:21:06.708825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:06.743999 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jul 10 00:21:06.754272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:06.758806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:21:06.787594 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 10 00:21:06.824815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:21:06.826676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:21:06.922707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:21:06.925559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:21:07.001684 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jul 10 00:21:07.011366 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jul 10 00:21:07.013469 kernel: scsi host0: Virtio SCSI HBA Jul 10 00:21:07.017259 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 10 00:21:07.044901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:21:07.044989 kernel: GPT:9289727 != 125829119 Jul 10 00:21:07.045008 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:21:07.045022 kernel: GPT:9289727 != 125829119 Jul 10 00:21:07.045033 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:21:07.045045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:07.063685 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:21:07.071665 kernel: libata version 3.00 loaded. Jul 10 00:21:07.078670 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 10 00:21:07.083666 kernel: scsi host1: ata_piix Jul 10 00:21:07.087662 kernel: AES CTR mode by8 optimization enabled Jul 10 00:21:07.089664 kernel: scsi host2: ata_piix Jul 10 00:21:07.122737 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jul 10 00:21:07.123000 kernel: ACPI: bus type USB registered Jul 10 00:21:07.123022 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jul 10 00:21:07.123171 kernel: usbcore: registered new interface driver usbfs Jul 10 00:21:07.123185 kernel: usbcore: registered new interface driver hub Jul 10 00:21:07.114003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:07.134672 kernel: usbcore: registered new device driver usb Jul 10 00:21:07.135381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
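The virtio_blk entry above reports the vda capacity in 512-byte sectors, and the neighbouring GPT warning compares the backup-header location (LBA 9289727) against the last LBA of the grown disk (125829119, i.e. sectors minus one), which disk-uuid.service rewrites later in the log. A short Python check that 125829120 sectors is exactly the "64.4 GB/60.0 GiB" the kernel prints:

    # Capacity reported for /dev/vda: 125829120 logical blocks of 512 bytes each.
    sectors = 125829120
    size_bytes = sectors * 512
    print(size_bytes / 10**9)   # 64.42450944 -> the "64.4 GB" figure (decimal units)
    print(size_bytes / 2**30)   # 60.0        -> the "60.0 GiB" figure (binary units)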
Jul 10 00:21:07.141733 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:21:07.141787 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jul 10 00:21:07.141812 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jul 10 00:21:07.136273 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:07.147222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:07.150501 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:07.215541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:07.352693 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jul 10 00:21:07.357679 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jul 10 00:21:07.361430 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 00:21:07.364316 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jul 10 00:21:07.364553 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jul 10 00:21:07.364724 kernel: hub 1-0:1.0: USB hub found Jul 10 00:21:07.364864 kernel: hub 1-0:1.0: 2 ports detected Jul 10 00:21:07.372504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 00:21:07.373012 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:21:07.384228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:21:07.385056 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:21:07.396900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:21:07.397441 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:21:07.398138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:07.398856 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:21:07.400536 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:21:07.401837 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:21:07.424598 disk-uuid[616]: Primary Header is updated. Jul 10 00:21:07.424598 disk-uuid[616]: Secondary Entries is updated. Jul 10 00:21:07.424598 disk-uuid[616]: Secondary Header is updated. Jul 10 00:21:07.433762 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:07.432198 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:21:08.442113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:08.442996 disk-uuid[620]: The operation has completed successfully. Jul 10 00:21:08.500305 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:21:08.501242 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:21:08.533536 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:21:08.553132 sh[635]: Success Jul 10 00:21:08.574015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 10 00:21:08.574076 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:21:08.574091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:21:08.585669 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 10 00:21:08.645861 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:21:08.651829 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:21:08.663561 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:21:08.680426 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:21:08.680515 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (647) Jul 10 00:21:08.682665 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:21:08.684765 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:08.684831 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:21:08.693788 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:21:08.695208 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:21:08.696292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:21:08.698113 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:21:08.699736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:21:08.737072 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (679) Jul 10 00:21:08.737154 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:08.737169 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:08.738683 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:21:08.750697 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:08.751661 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:21:08.753710 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:21:08.863459 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:21:08.867046 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:21:08.932936 systemd-networkd[817]: lo: Link UP Jul 10 00:21:08.932947 systemd-networkd[817]: lo: Gained carrier Jul 10 00:21:08.936979 systemd-networkd[817]: Enumeration completed Jul 10 00:21:08.937571 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:21:08.937867 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 10 00:21:08.937872 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jul 10 00:21:08.938029 systemd[1]: Reached target network.target - Network. Jul 10 00:21:08.940911 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 00:21:08.940921 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:21:08.945044 systemd-networkd[817]: eth0: Link UP Jul 10 00:21:08.945051 systemd-networkd[817]: eth0: Gained carrier Jul 10 00:21:08.945073 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 10 00:21:08.948189 systemd-networkd[817]: eth1: Link UP Jul 10 00:21:08.948195 systemd-networkd[817]: eth1: Gained carrier Jul 10 00:21:08.948216 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:08.950018 ignition[727]: Ignition 2.21.0 Jul 10 00:21:08.950025 ignition[727]: Stage: fetch-offline Jul 10 00:21:08.951789 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:21:08.950062 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:08.950072 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:08.954814 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 10 00:21:08.950168 ignition[727]: parsed url from cmdline: "" Jul 10 00:21:08.950172 ignition[727]: no config URL provided Jul 10 00:21:08.950177 ignition[727]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:21:08.950185 ignition[727]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:21:08.950191 ignition[727]: failed to fetch config: resource requires networking Jul 10 00:21:08.950542 ignition[727]: Ignition finished successfully Jul 10 00:21:08.964781 systemd-networkd[817]: eth0: DHCPv4 address 143.110.236.9/20, gateway 143.110.224.1 acquired from 169.254.169.253 Jul 10 00:21:08.970780 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253 Jul 10 00:21:08.987970 ignition[827]: Ignition 2.21.0 Jul 10 00:21:08.988523 ignition[827]: Stage: fetch Jul 10 00:21:08.988727 ignition[827]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:08.988739 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:08.988855 ignition[827]: parsed url from cmdline: "" Jul 10 00:21:08.988859 ignition[827]: no config URL provided Jul 10 00:21:08.988865 ignition[827]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:21:08.988873 ignition[827]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:21:08.988920 ignition[827]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jul 10 00:21:09.004941 ignition[827]: GET result: OK Jul 10 00:21:09.005344 ignition[827]: parsing config with SHA512: 2d8989f75d63f5e722dd5f0fc4c6fa11d6e8e15ae6b09357c17395cdbe288514a1b8ab048efe9cbd43dbfa021f1cc5f293ac7e7ba4078256c7f3aebe2efd1506 Jul 10 00:21:09.013835 unknown[827]: fetched base config from "system" Jul 10 00:21:09.014746 ignition[827]: fetch: fetch complete Jul 10 00:21:09.013848 unknown[827]: fetched base config from "system" Jul 10 00:21:09.014753 ignition[827]: fetch: fetch passed Jul 10 00:21:09.013854 unknown[827]: fetched user config from "digitalocean" Jul 10 00:21:09.014816 ignition[827]: Ignition finished successfully Jul 10 00:21:09.018280 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 00:21:09.020148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
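The Ignition fetch stage above pulls the user-data config from DigitalOcean's link-local metadata service at the URL shown in the log. A minimal Python sketch of the same request, assuming it is run from inside the droplet (the 169.254.169.254 endpoint is only reachable there); this is only an illustration of the endpoint Ignition queried, not part of the boot flow itself:

    import urllib.request

    # Same endpoint as "GET http://169.254.169.254/metadata/v1/user-data: attempt #1" in the log.
    URL = "http://169.254.169.254/metadata/v1/user-data"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        user_data = resp.read().decode("utf-8", errors="replace")
    print(user_data)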
Jul 10 00:21:09.056402 ignition[835]: Ignition 2.21.0 Jul 10 00:21:09.057079 ignition[835]: Stage: kargs Jul 10 00:21:09.057767 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:09.058131 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:09.059340 ignition[835]: kargs: kargs passed Jul 10 00:21:09.059394 ignition[835]: Ignition finished successfully Jul 10 00:21:09.060925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:21:09.063315 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:21:09.100163 ignition[841]: Ignition 2.21.0 Jul 10 00:21:09.100178 ignition[841]: Stage: disks Jul 10 00:21:09.100350 ignition[841]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:09.100361 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:09.101160 ignition[841]: disks: disks passed Jul 10 00:21:09.102384 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:21:09.101213 ignition[841]: Ignition finished successfully Jul 10 00:21:09.103926 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:21:09.104329 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:21:09.105028 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:21:09.105828 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:21:09.106634 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:21:09.108905 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:21:09.149506 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:21:09.153582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:21:09.155237 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:21:09.291655 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:21:09.292372 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:21:09.293323 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:21:09.295169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:21:09.296807 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:21:09.301830 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jul 10 00:21:09.304754 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 10 00:21:09.307980 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:21:09.308082 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:21:09.315932 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:21:09.318792 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 10 00:21:09.333658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857) Jul 10 00:21:09.346005 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:09.346081 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:09.346095 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:21:09.358610 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:21:09.388172 coreos-metadata[859]: Jul 10 00:21:09.388 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 10 00:21:09.397669 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:21:09.399741 coreos-metadata[860]: Jul 10 00:21:09.398 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 10 00:21:09.405071 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:21:09.407106 coreos-metadata[859]: Jul 10 00:21:09.407 INFO Fetch successful Jul 10 00:21:09.408686 coreos-metadata[860]: Jul 10 00:21:09.408 INFO Fetch successful Jul 10 00:21:09.414764 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:21:09.416193 coreos-metadata[860]: Jul 10 00:21:09.415 INFO wrote hostname ci-4344.1.1-n-2654026dcf to /sysroot/etc/hostname Jul 10 00:21:09.417026 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 10 00:21:09.418100 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jul 10 00:21:09.418242 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jul 10 00:21:09.423958 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:21:09.528967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:21:09.531003 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:21:09.532499 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:21:09.550681 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:09.565775 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:21:09.579620 ignition[980]: INFO : Ignition 2.21.0 Jul 10 00:21:09.579620 ignition[980]: INFO : Stage: mount Jul 10 00:21:09.582053 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:09.582053 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:09.583287 ignition[980]: INFO : mount: mount passed Jul 10 00:21:09.584456 ignition[980]: INFO : Ignition finished successfully Jul 10 00:21:09.585077 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:21:09.587096 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:21:09.680115 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:21:09.682961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:21:09.707696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (992) Jul 10 00:21:09.707777 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:09.710182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:09.710266 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:21:09.717029 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
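Note: the metadata hostname agent above fetches /metadata/v1.json and writes the droplet hostname into /sysroot/etc/hostname. A rough sketch of that step, under the assumption that the metadata JSON carries a top-level "hostname" field (the field name is assumed, not shown in the log); the URL and target path are the ones recorded above.

# Rough stand-in for what flatcar-metadata-hostname.service does above: fetch the
# droplet metadata document and persist the hostname. "hostname" key is an assumed
# field name; the destination path mirrors the log line.
import json
import urllib.request

METADATA_JSON = "http://169.254.169.254/metadata/v1.json"   # endpoint seen in the log
HOSTNAME_PATH = "/sysroot/etc/hostname"                     # path seen in the log

def write_hostname(url: str = METADATA_JSON, path: str = HOSTNAME_PATH) -> str:
    with urllib.request.urlopen(url, timeout=5.0) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]          # assumed field name in the metadata JSON
    with open(path, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())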
Jul 10 00:21:09.762939 ignition[1008]: INFO : Ignition 2.21.0 Jul 10 00:21:09.762939 ignition[1008]: INFO : Stage: files Jul 10 00:21:09.764741 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:09.764741 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:09.768127 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:21:09.770193 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:21:09.770193 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:21:09.775321 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:21:09.776549 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:21:09.778018 unknown[1008]: wrote ssh authorized keys file for user: core Jul 10 00:21:09.779069 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:21:09.781324 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:21:09.782478 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:21:09.820047 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:21:09.979317 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:21:09.979317 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:21:09.979317 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 10 00:21:10.072842 systemd-networkd[817]: eth0: Gained IPv6LL Jul 10 00:21:10.465039 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:21:10.520821 systemd-networkd[817]: eth1: Gained IPv6LL Jul 10 00:21:10.658210 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:21:10.658210 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:21:10.664060 
ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:21:10.664060 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 00:21:11.333837 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:21:11.782661 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:21:11.782661 ignition[1008]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 00:21:11.784686 ignition[1008]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:21:11.786564 ignition[1008]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:21:11.786564 ignition[1008]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 00:21:11.786564 ignition[1008]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:21:11.788414 ignition[1008]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:21:11.788414 ignition[1008]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:21:11.788414 ignition[1008]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:21:11.788414 ignition[1008]: INFO : files: files passed Jul 10 00:21:11.788414 ignition[1008]: INFO : Ignition finished successfully Jul 10 00:21:11.789717 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 00:21:11.793035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 00:21:11.794795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 00:21:11.810823 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:21:11.810993 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
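Note: the files stage above stores the Helm release tarball under /sysroot/opt and writes a prepare-helm.service unit, which presumably unpacks the archive into /opt/bin on the booted system. A hedged sketch of such a download-and-extract step follows; the in-archive member path "linux-amd64/helm" is an assumption about the upstream tarball layout, only the download URL comes from the log.

# Sketch of the kind of work recorded above and presumably finished later by
# prepare-helm.service: download the Helm release tarball named in the log and
# extract the binary. "linux-amd64/helm" is an assumed member path.
import os
import tarfile
import urllib.request

HELM_URL = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"  # URL from the log

def fetch_and_extract(url: str = HELM_URL, archive: str = "/tmp/helm.tar.gz",
                      dest: str = "/opt/bin") -> None:
    urllib.request.urlretrieve(url, archive)            # download the tarball
    os.makedirs(dest, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        member = tar.getmember("linux-amd64/helm")      # assumed member path
        member.name = "helm"                            # drop the leading directory
        tar.extract(member, path=dest)

if __name__ == "__main__":
    fetch_and_extract()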
Jul 10 00:21:11.821139 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:11.821964 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:11.822911 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:11.825350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:21:11.826224 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 00:21:11.828257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 00:21:11.883443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:21:11.883710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 00:21:11.884965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 00:21:11.885353 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 00:21:11.886062 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 00:21:11.886925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 00:21:11.917434 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:21:11.919340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:21:11.941301 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:11.942324 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:11.942766 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:21:11.943597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:21:11.943819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:21:11.944758 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:21:11.945167 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:21:11.945911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:21:11.946629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:21:11.947388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:21:11.948208 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:21:11.948937 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:21:11.949596 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:21:11.950793 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:21:11.951378 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:21:11.952117 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:21:11.952726 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:21:11.952931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:21:11.953695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:11.954164 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:11.954736 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 10 00:21:11.954859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:21:11.955443 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:21:11.955627 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:21:11.956440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:21:11.956569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:21:11.957420 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:21:11.957615 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:21:11.958157 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 10 00:21:11.958294 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 10 00:21:11.959767 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:21:11.961249 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:21:11.961434 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:11.964873 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:21:11.965293 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:21:11.965471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:21:11.971341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:21:11.971493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:21:11.978332 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:21:11.978451 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:21:11.996441 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:21:12.001010 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:21:12.001710 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:21:12.002924 ignition[1063]: INFO : Ignition 2.21.0 Jul 10 00:21:12.002924 ignition[1063]: INFO : Stage: umount Jul 10 00:21:12.005532 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:12.005532 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 10 00:21:12.005532 ignition[1063]: INFO : umount: umount passed Jul 10 00:21:12.005532 ignition[1063]: INFO : Ignition finished successfully Jul 10 00:21:12.007327 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:21:12.007443 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:21:12.008328 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:21:12.008381 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:21:12.009082 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:21:12.009128 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:21:12.009624 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 10 00:21:12.009709 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 10 00:21:12.010290 systemd[1]: Stopped target network.target - Network. Jul 10 00:21:12.010818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:21:12.010869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 10 00:21:12.011463 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:21:12.012060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:21:12.012188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:12.012718 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:21:12.013483 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:21:12.014481 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:21:12.014541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:21:12.015067 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:21:12.015107 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:21:12.015729 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:21:12.015839 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:21:12.016433 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:21:12.016473 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:21:12.017019 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:21:12.017076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:21:12.017753 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:21:12.018320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:21:12.025701 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:21:12.025824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:21:12.029318 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 00:21:12.030747 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:21:12.030921 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:21:12.032985 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 00:21:12.033466 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 00:21:12.034140 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:21:12.034186 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:12.035729 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:21:12.036122 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:21:12.036174 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:21:12.036561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:21:12.036598 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:12.037835 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:21:12.037884 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:12.038228 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:21:12.038269 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:12.040842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:12.045823 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 10 00:21:12.045931 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:12.059043 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:21:12.059201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:12.060603 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:21:12.060726 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:12.061144 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:21:12.061177 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:12.062252 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:21:12.062303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:21:12.063344 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:21:12.063411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:21:12.064394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:21:12.064469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:21:12.066145 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:21:12.067897 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 00:21:12.067972 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:12.068774 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:21:12.068838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:12.070738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:12.070789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:12.074474 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 10 00:21:12.074556 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 00:21:12.074599 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:12.075030 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:21:12.076192 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:21:12.084057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:21:12.084201 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:21:12.085231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:21:12.087848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:21:12.105973 systemd[1]: Switching root. Jul 10 00:21:12.152609 systemd-journald[210]: Journal stopped Jul 10 00:21:13.428898 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). 
Jul 10 00:21:13.428983 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:21:13.429004 kernel: SELinux: policy capability open_perms=1 Jul 10 00:21:13.429019 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:21:13.429030 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:21:13.429042 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:21:13.429058 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:21:13.429073 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:21:13.429084 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:21:13.429096 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 00:21:13.429107 kernel: audit: type=1403 audit(1752106872.333:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:21:13.429120 systemd[1]: Successfully loaded SELinux policy in 53.054ms. Jul 10 00:21:13.429145 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.377ms. Jul 10 00:21:13.429159 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:21:13.429173 systemd[1]: Detected virtualization kvm. Jul 10 00:21:13.429185 systemd[1]: Detected architecture x86-64. Jul 10 00:21:13.429197 systemd[1]: Detected first boot. Jul 10 00:21:13.429210 systemd[1]: Hostname set to . Jul 10 00:21:13.429222 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:21:13.429235 zram_generator::config[1108]: No configuration found. Jul 10 00:21:13.429251 kernel: Guest personality initialized and is inactive Jul 10 00:21:13.429264 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 00:21:13.429276 kernel: Initialized host personality Jul 10 00:21:13.429287 kernel: NET: Registered PF_VSOCK protocol family Jul 10 00:21:13.429299 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:21:13.429312 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 00:21:13.429325 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:21:13.429337 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:21:13.429349 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:21:13.429365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:21:13.429378 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:21:13.429390 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:21:13.429407 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:21:13.429419 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:21:13.429432 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:21:13.429445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:21:13.429458 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:21:13.429472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 10 00:21:13.429484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:13.429497 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:21:13.429511 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:21:13.429523 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:21:13.429535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:21:13.429550 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 00:21:13.429562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:13.429574 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:13.429586 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:21:13.429598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:21:13.429611 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:21:13.429623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:21:13.429635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:13.437542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:21:13.437593 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:21:13.437615 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:21:13.437634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:21:13.437702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:21:13.437723 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 00:21:13.437744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:13.437763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:13.437781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:13.437800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:21:13.437819 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:21:13.437843 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:21:13.437862 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:21:13.437879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:13.437896 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:21:13.437913 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:21:13.437932 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:21:13.443237 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:21:13.443267 systemd[1]: Reached target machines.target - Containers. Jul 10 00:21:13.443288 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 10 00:21:13.443301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:13.443315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:21:13.443334 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:21:13.443346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:13.443359 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:21:13.443372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:13.443384 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:21:13.443396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:13.443412 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:21:13.443424 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:21:13.443437 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:21:13.443449 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:21:13.443461 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:21:13.443474 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:13.443487 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:21:13.443502 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:21:13.443515 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:21:13.443528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:21:13.443541 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 00:21:13.443554 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:21:13.443566 kernel: fuse: init (API version 7.41) Jul 10 00:21:13.443586 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:21:13.443603 systemd[1]: Stopped verity-setup.service. Jul 10 00:21:13.443620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:13.443633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:21:13.444592 kernel: ACPI: bus type drm_connector registered Jul 10 00:21:13.444624 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:21:13.445672 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:21:13.445706 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:21:13.445719 kernel: loop: module loaded Jul 10 00:21:13.445732 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:21:13.445745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:21:13.445758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:13.445771 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 10 00:21:13.445787 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:21:13.445813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:13.445831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:13.445849 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:21:13.445867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:21:13.445883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:13.445897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:13.445909 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:21:13.445921 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:21:13.445934 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:13.445950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:13.445963 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:21:13.445975 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:21:13.445988 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:21:13.446004 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:21:13.446018 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:21:13.446030 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 00:21:13.446043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:21:13.446055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:13.446071 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:21:13.446084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:13.446096 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:21:13.446109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:13.446121 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:21:13.446133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:13.446145 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:21:13.446158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:21:13.446191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:21:13.446204 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:13.446217 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:21:13.446229 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 00:21:13.446285 systemd-journald[1178]: Collecting audit messages is disabled. Jul 10 00:21:13.446311 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jul 10 00:21:13.446325 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:21:13.446337 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:21:13.446354 systemd-journald[1178]: Journal started Jul 10 00:21:13.446378 systemd-journald[1178]: Runtime Journal (/run/log/journal/84682df54ec4499fa7379237a0b203c2) is 4.9M, max 39.5M, 34.6M free. Jul 10 00:21:12.989273 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:21:13.465189 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:21:13.465255 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:21:13.465282 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:21:13.013510 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:21:13.014149 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:21:13.474982 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:21:13.493823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:13.503716 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:21:13.508810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:21:13.522031 kernel: loop1: detected capacity change from 0 to 8 Jul 10 00:21:13.530354 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:21:13.532192 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 00:21:13.541969 systemd-journald[1178]: Time spent on flushing to /var/log/journal/84682df54ec4499fa7379237a0b203c2 is 30.231ms for 1020 entries. Jul 10 00:21:13.541969 systemd-journald[1178]: System Journal (/var/log/journal/84682df54ec4499fa7379237a0b203c2) is 8M, max 195.6M, 187.6M free. Jul 10 00:21:13.579629 kernel: loop2: detected capacity change from 0 to 113872 Jul 10 00:21:13.579698 systemd-journald[1178]: Received client request to flush runtime journal. Jul 10 00:21:13.579744 kernel: loop3: detected capacity change from 0 to 146240 Jul 10 00:21:13.581795 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:21:13.631673 kernel: loop4: detected capacity change from 0 to 221472 Jul 10 00:21:13.646347 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:21:13.649952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:21:13.684720 kernel: loop5: detected capacity change from 0 to 8 Jul 10 00:21:13.694669 kernel: loop6: detected capacity change from 0 to 113872 Jul 10 00:21:13.733027 kernel: loop7: detected capacity change from 0 to 146240 Jul 10 00:21:13.738810 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 10 00:21:13.738837 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 10 00:21:13.746698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:13.770850 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 10 00:21:13.771427 (sd-merge)[1252]: Merged extensions into '/usr'. Jul 10 00:21:13.775957 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:21:13.775977 systemd[1]: Reloading... 
Jul 10 00:21:13.923701 zram_generator::config[1278]: No configuration found. Jul 10 00:21:14.096425 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:21:14.173091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:14.276878 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:21:14.277530 systemd[1]: Reloading finished in 501 ms. Jul 10 00:21:14.304752 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:21:14.305599 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:21:14.318870 systemd[1]: Starting ensure-sysext.service... Jul 10 00:21:14.320936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:21:14.364315 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:21:14.364332 systemd[1]: Reloading... Jul 10 00:21:14.413497 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:21:14.413538 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:21:14.413915 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:21:14.414190 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:21:14.416917 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:21:14.417283 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jul 10 00:21:14.417344 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jul 10 00:21:14.429765 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:21:14.429786 systemd-tmpfiles[1326]: Skipping /boot Jul 10 00:21:14.459442 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:21:14.459457 systemd-tmpfiles[1326]: Skipping /boot Jul 10 00:21:14.509669 zram_generator::config[1353]: No configuration found. Jul 10 00:21:14.628846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:14.721120 systemd[1]: Reloading finished in 356 ms. Jul 10 00:21:14.747022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:21:14.754715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:14.761817 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:14.765565 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:21:14.769699 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:21:14.776615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:21:14.779586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 10 00:21:14.782974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:21:14.792546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.793496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:14.798953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:14.801921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:14.813955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:14.814945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:14.815368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:14.815502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.820264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:14.821410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:14.827976 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.828212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:14.834936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:14.835496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:14.835616 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:14.839726 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:21:14.840136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.850117 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:21:14.851215 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:14.852724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:14.857999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:14.858399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:14.868027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.868271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:14.872431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:14.875963 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 10 00:21:14.883272 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Jul 10 00:21:14.886747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:14.887338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:14.887459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:14.887594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:14.889719 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:21:14.892684 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:21:14.900920 systemd[1]: Finished ensure-sysext.service. Jul 10 00:21:14.902457 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:21:14.917611 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:21:14.924951 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:21:14.925535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:14.927516 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:21:14.928565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:21:14.929233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:14.929396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:14.944263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:21:14.946307 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:14.946520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:14.948790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:14.961799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:14.962046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:14.965138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:14.999393 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:21:15.017649 augenrules[1470]: No rules Jul 10 00:21:15.019270 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:15.019736 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:15.062826 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jul 10 00:21:15.066076 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 10 00:21:15.066456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:21:15.066600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:15.068878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:15.082966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:15.091421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:15.092017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:15.092063 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:15.092104 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:21:15.092120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:15.139185 kernel: ISO 9660 Extensions: RRIP_1991A Jul 10 00:21:15.138811 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:21:15.142466 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:21:15.144944 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 10 00:21:15.152211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:15.152609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:15.175809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:15.180408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:15.182019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:15.183872 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:15.184321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:15.191266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:15.274285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:21:15.279092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:21:15.281671 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:21:15.322721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 00:21:15.326222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:21:15.329687 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:21:15.374671 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 10 00:21:15.384681 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:21:15.487786 systemd-resolved[1401]: Positive Trust Anchors: Jul 10 00:21:15.487808 systemd-resolved[1401]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:21:15.487857 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:21:15.495190 systemd-resolved[1401]: Using system hostname 'ci-4344.1.1-n-2654026dcf'. Jul 10 00:21:15.497285 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:21:15.497926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:15.519099 systemd-networkd[1450]: lo: Link UP Jul 10 00:21:15.519108 systemd-networkd[1450]: lo: Gained carrier Jul 10 00:21:15.525142 systemd-networkd[1450]: Enumeration completed Jul 10 00:21:15.526779 systemd-networkd[1450]: eth0: Configuring with /run/systemd/network/10-a2:2d:db:6b:9b:5f.network. Jul 10 00:21:15.527347 systemd-networkd[1450]: eth1: Configuring with /run/systemd/network/10-52:46:6e:c3:7b:3d.network. Jul 10 00:21:15.528397 systemd-networkd[1450]: eth0: Link UP Jul 10 00:21:15.528544 systemd-networkd[1450]: eth0: Gained carrier Jul 10 00:21:15.531957 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:21:15.532525 systemd[1]: Reached target network.target - Network. Jul 10 00:21:15.534380 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:21:15.536041 systemd-networkd[1450]: eth1: Link UP Jul 10 00:21:15.536943 systemd-networkd[1450]: eth1: Gained carrier Jul 10 00:21:15.539253 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:21:15.602678 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 10 00:21:15.620590 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:21:15.621393 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:21:15.622527 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:21:15.623174 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:21:15.623576 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:21:15.623959 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:21:15.624313 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:21:15.624676 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:21:15.624730 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:21:15.625081 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:21:15.625950 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:21:15.626791 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 10 00:21:15.627534 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:21:15.629936 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:21:15.633303 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:21:15.638668 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 10 00:21:15.639379 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:21:15.647922 systemd-timesyncd[1429]: Contacted time server 45.77.126.122:123 (0.flatcar.pool.ntp.org). Jul 10 00:21:15.647995 systemd-timesyncd[1429]: Initial clock synchronization to Thu 2025-07-10 00:21:15.994597 UTC. Jul 10 00:21:15.659844 kernel: Console: switching to colour dummy device 80x25 Jul 10 00:21:15.659914 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 10 00:21:15.659930 kernel: [drm] features: -context_init Jul 10 00:21:15.659778 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:21:15.660088 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:21:15.662667 kernel: [drm] number of scanouts: 1 Jul 10 00:21:15.662731 kernel: [drm] number of cap sets: 0 Jul 10 00:21:15.665663 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jul 10 00:21:15.671158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:21:15.671967 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:21:15.673793 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:21:15.678555 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:21:15.678785 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:21:15.678969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:15.679062 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:15.680975 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:21:15.686108 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:21:15.691316 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:21:15.695392 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:21:15.699039 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:21:15.713003 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:21:15.713124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:21:15.714775 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:21:15.717967 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:21:15.725097 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:21:15.733976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:21:15.737678 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:21:15.740276 jq[1532]: false Jul 10 00:21:15.748126 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 10 00:21:15.750498 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:21:15.751329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:21:15.759108 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:21:15.762125 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache Jul 10 00:21:15.764779 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:21:15.771360 oslogin_cache_refresh[1534]: Refreshing passwd entry cache Jul 10 00:21:15.777601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:21:15.778312 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:21:15.778574 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:21:15.793674 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting Jul 10 00:21:15.793674 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:21:15.793674 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache Jul 10 00:21:15.792121 oslogin_cache_refresh[1534]: Failure getting users, quitting Jul 10 00:21:15.792148 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:21:15.792216 oslogin_cache_refresh[1534]: Refreshing group entry cache Jul 10 00:21:15.798863 oslogin_cache_refresh[1534]: Failure getting groups, quitting Jul 10 00:21:15.801771 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:21:15.806967 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting Jul 10 00:21:15.806967 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:15.798879 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:15.802188 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:21:15.812143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:15.814983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:21:15.815342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:21:15.845716 jq[1543]: true Jul 10 00:21:15.848721 extend-filesystems[1533]: Found /dev/vda6 Jul 10 00:21:15.862752 extend-filesystems[1533]: Found /dev/vda9 Jul 10 00:21:15.871683 coreos-metadata[1528]: Jul 10 00:21:15.867 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 10 00:21:15.886377 extend-filesystems[1533]: Checking size of /dev/vda9 Jul 10 00:21:15.893176 dbus-daemon[1530]: [system] SELinux support is enabled Jul 10 00:21:15.895318 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:21:15.896801 coreos-metadata[1528]: Jul 10 00:21:15.895 INFO Fetch successful Jul 10 00:21:15.900230 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:21:15.901517 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 10 00:21:15.906052 update_engine[1542]: I20250710 00:21:15.903296 1542 main.cc:92] Flatcar Update Engine starting Jul 10 00:21:15.904203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:21:15.904239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:21:15.904393 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:21:15.904473 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 10 00:21:15.904490 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:21:15.912032 tar[1549]: linux-amd64/helm Jul 10 00:21:15.914126 jq[1561]: true Jul 10 00:21:15.922127 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:21:15.927617 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:21:15.933211 update_engine[1542]: I20250710 00:21:15.930284 1542 update_check_scheduler.cc:74] Next update check in 11m43s Jul 10 00:21:15.939871 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:21:15.964041 extend-filesystems[1533]: Resized partition /dev/vda9 Jul 10 00:21:15.985672 extend-filesystems[1584]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:21:16.018722 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 10 00:21:16.076746 bash[1600]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:21:16.079419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:21:16.086061 systemd[1]: Starting sshkeys.service... Jul 10 00:21:16.111328 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:21:16.111804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:21:16.161209 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 10 00:21:16.166334 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 10 00:21:16.183076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:16.183527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:16.228712 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 10 00:21:16.239235 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:16.275585 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:21:16.275585 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 10 00:21:16.275585 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 10 00:21:16.275864 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Jul 10 00:21:16.277400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 10 00:21:16.278579 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:21:16.280109 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:21:16.325035 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:21:16.410221 coreos-metadata[1607]: Jul 10 00:21:16.406 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 10 00:21:16.427178 systemd-logind[1541]: New seat seat0. Jul 10 00:21:16.430975 coreos-metadata[1607]: Jul 10 00:21:16.430 INFO Fetch successful Jul 10 00:21:16.452362 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:21:16.452391 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:21:16.452711 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:21:16.487775 unknown[1607]: wrote ssh authorized keys file for user: core Jul 10 00:21:16.520233 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:21:16.528858 update-ssh-keys[1627]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:21:16.531317 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 00:21:16.535910 systemd[1]: Finished sshkeys.service. Jul 10 00:21:16.594405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:16.655599 containerd[1565]: time="2025-07-10T00:21:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:21:16.659237 containerd[1565]: time="2025-07-10T00:21:16.659185527Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:21:16.700814 containerd[1565]: time="2025-07-10T00:21:16.699744283Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.943µs" Jul 10 00:21:16.702146 containerd[1565]: time="2025-07-10T00:21:16.702100817Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:21:16.702273 containerd[1565]: time="2025-07-10T00:21:16.702259973Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:21:16.702588 containerd[1565]: time="2025-07-10T00:21:16.702560841Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:21:16.703362 containerd[1565]: time="2025-07-10T00:21:16.703338818Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:21:16.703459 containerd[1565]: time="2025-07-10T00:21:16.703445577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:16.703694 containerd[1565]: time="2025-07-10T00:21:16.703653231Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:16.704131 containerd[1565]: time="2025-07-10T00:21:16.704109783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:16.704587 containerd[1565]: time="2025-07-10T00:21:16.704561522Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:16.705484 containerd[1565]: time="2025-07-10T00:21:16.705453018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:16.705643 containerd[1565]: time="2025-07-10T00:21:16.705623546Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:16.706039 containerd[1565]: time="2025-07-10T00:21:16.706021834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:21:16.706217 containerd[1565]: time="2025-07-10T00:21:16.706198003Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:21:16.707741 containerd[1565]: time="2025-07-10T00:21:16.707216989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:16.708584 containerd[1565]: time="2025-07-10T00:21:16.708559843Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:16.709045 containerd[1565]: time="2025-07-10T00:21:16.709024149Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:21:16.709212 containerd[1565]: time="2025-07-10T00:21:16.709175350Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:21:16.710352 containerd[1565]: time="2025-07-10T00:21:16.709651133Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:21:16.710352 containerd[1565]: time="2025-07-10T00:21:16.709771882Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:21:16.715043 containerd[1565]: time="2025-07-10T00:21:16.714984493Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:21:16.715425 containerd[1565]: time="2025-07-10T00:21:16.715394467Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:21:16.715702 containerd[1565]: time="2025-07-10T00:21:16.715685875Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:21:16.715777 containerd[1565]: time="2025-07-10T00:21:16.715765401Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:21:16.716298 containerd[1565]: time="2025-07-10T00:21:16.715821019Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:21:16.716298 containerd[1565]: time="2025-07-10T00:21:16.716237485Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:21:16.716298 containerd[1565]: time="2025-07-10T00:21:16.716251473Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:21:16.716298 containerd[1565]: time="2025-07-10T00:21:16.716263289Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:21:16.716298 containerd[1565]: time="2025-07-10T00:21:16.716276944Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:21:16.716458 containerd[1565]: time="2025-07-10T00:21:16.716445684Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:21:16.716661 containerd[1565]: time="2025-07-10T00:21:16.716499353Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:21:16.716661 containerd[1565]: time="2025-07-10T00:21:16.716525683Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:21:16.717230 containerd[1565]: time="2025-07-10T00:21:16.716919467Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:21:16.717361 containerd[1565]: time="2025-07-10T00:21:16.717345054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:21:16.717443 containerd[1565]: time="2025-07-10T00:21:16.717431044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:21:16.717863 containerd[1565]: time="2025-07-10T00:21:16.717633821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:21:16.717954 containerd[1565]: time="2025-07-10T00:21:16.717940357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:21:16.718087 containerd[1565]: time="2025-07-10T00:21:16.717995391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:21:16.718273 containerd[1565]: time="2025-07-10T00:21:16.718257387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:21:16.718597 containerd[1565]: time="2025-07-10T00:21:16.718475237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:21:16.718597 containerd[1565]: time="2025-07-10T00:21:16.718498295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:21:16.718597 containerd[1565]: time="2025-07-10T00:21:16.718510307Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:21:16.718971 containerd[1565]: time="2025-07-10T00:21:16.718906981Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:21:16.719359 containerd[1565]: time="2025-07-10T00:21:16.719145145Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:21:16.719435 containerd[1565]: time="2025-07-10T00:21:16.719423289Z" level=info msg="Start snapshots syncer" Jul 10 00:21:16.719589 containerd[1565]: time="2025-07-10T00:21:16.719573862Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:21:16.721336 containerd[1565]: time="2025-07-10T00:21:16.720831159Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:21:16.721336 containerd[1565]: time="2025-07-10T00:21:16.721260560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:21:16.722038 containerd[1565]: time="2025-07-10T00:21:16.721933397Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:21:16.722807 containerd[1565]: time="2025-07-10T00:21:16.722765879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:21:16.723215 containerd[1565]: time="2025-07-10T00:21:16.722976916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:21:16.723324 containerd[1565]: time="2025-07-10T00:21:16.723305699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:21:16.723395 containerd[1565]: time="2025-07-10T00:21:16.723383768Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:21:16.723531 containerd[1565]: time="2025-07-10T00:21:16.723437009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:21:16.723888 containerd[1565]: time="2025-07-10T00:21:16.723585554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:21:16.723888 containerd[1565]: time="2025-07-10T00:21:16.723826130Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:21:16.723888 containerd[1565]: time="2025-07-10T00:21:16.723857424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:21:16.723888 containerd[1565]: 
time="2025-07-10T00:21:16.723870115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:21:16.724115 containerd[1565]: time="2025-07-10T00:21:16.724098964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725515883Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725558386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725571279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725587161Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725596422Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725605880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725616451Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725647355Z" level=info msg="runtime interface created" Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725654764Z" level=info msg="created NRI interface" Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725664122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725691549Z" level=info msg="Connect containerd service" Jul 10 00:21:16.726387 containerd[1565]: time="2025-07-10T00:21:16.725728122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:21:16.729055 containerd[1565]: time="2025-07-10T00:21:16.729018396Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:21:16.740089 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:21:16.793187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:21:16.797175 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:21:16.820747 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:21:16.821061 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:21:16.823913 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:21:16.872642 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:21:16.876049 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:21:16.881106 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jul 10 00:21:16.882011 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:21:16.959889 containerd[1565]: time="2025-07-10T00:21:16.959819420Z" level=info msg="Start subscribing containerd event" Jul 10 00:21:16.960059 containerd[1565]: time="2025-07-10T00:21:16.959895995Z" level=info msg="Start recovering state" Jul 10 00:21:16.960059 containerd[1565]: time="2025-07-10T00:21:16.960026546Z" level=info msg="Start event monitor" Jul 10 00:21:16.960059 containerd[1565]: time="2025-07-10T00:21:16.960043857Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:21:16.960059 containerd[1565]: time="2025-07-10T00:21:16.960055508Z" level=info msg="Start streaming server" Jul 10 00:21:16.960156 containerd[1565]: time="2025-07-10T00:21:16.960068094Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:21:16.960156 containerd[1565]: time="2025-07-10T00:21:16.960078491Z" level=info msg="runtime interface starting up..." Jul 10 00:21:16.960156 containerd[1565]: time="2025-07-10T00:21:16.960087439Z" level=info msg="starting plugins..." Jul 10 00:21:16.960156 containerd[1565]: time="2025-07-10T00:21:16.960106691Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:21:16.961140 containerd[1565]: time="2025-07-10T00:21:16.961045231Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:21:16.961282 containerd[1565]: time="2025-07-10T00:21:16.961246453Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:21:16.961502 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:21:16.962888 containerd[1565]: time="2025-07-10T00:21:16.961710197Z" level=info msg="containerd successfully booted in 0.307258s" Jul 10 00:21:17.081919 tar[1549]: linux-amd64/LICENSE Jul 10 00:21:17.081919 tar[1549]: linux-amd64/README.md Jul 10 00:21:17.100889 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:21:17.177910 systemd-networkd[1450]: eth0: Gained IPv6LL Jul 10 00:21:17.182052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:21:17.183843 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:21:17.186781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:17.188206 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:21:17.226175 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:21:17.240937 systemd-networkd[1450]: eth1: Gained IPv6LL Jul 10 00:21:18.358424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:18.359789 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:21:18.360882 systemd[1]: Startup finished in 3.328s (kernel) + 6.691s (initrd) + 6.078s (userspace) = 16.098s. 
Jul 10 00:21:18.367277 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:19.034235 kubelet[1685]: E0710 00:21:19.034161 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:19.037186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:19.037405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:19.038028 systemd[1]: kubelet.service: Consumed 1.259s CPU time, 264.3M memory peak. Jul 10 00:21:19.673165 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:21:19.674875 systemd[1]: Started sshd@0-143.110.236.9:22-147.75.109.163:38220.service - OpenSSH per-connection server daemon (147.75.109.163:38220). Jul 10 00:21:19.769698 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 38220 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:19.772506 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.788316 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:21:19.790535 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:21:19.793381 systemd-logind[1541]: New session 1 of user core. Jul 10 00:21:19.826838 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:21:19.831067 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:21:19.868050 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:21:19.871812 systemd-logind[1541]: New session c1 of user core. Jul 10 00:21:20.045271 systemd[1702]: Queued start job for default target default.target. Jul 10 00:21:20.060349 systemd[1702]: Created slice app.slice - User Application Slice. Jul 10 00:21:20.060395 systemd[1702]: Reached target paths.target - Paths. Jul 10 00:21:20.060448 systemd[1702]: Reached target timers.target - Timers. Jul 10 00:21:20.061956 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:21:20.079532 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:21:20.079712 systemd[1702]: Reached target sockets.target - Sockets. Jul 10 00:21:20.079779 systemd[1702]: Reached target basic.target - Basic System. Jul 10 00:21:20.079833 systemd[1702]: Reached target default.target - Main User Target. Jul 10 00:21:20.079875 systemd[1702]: Startup finished in 197ms. Jul 10 00:21:20.080098 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:21:20.091991 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:21:20.162993 systemd[1]: Started sshd@1-143.110.236.9:22-147.75.109.163:38232.service - OpenSSH per-connection server daemon (147.75.109.163:38232). Jul 10 00:21:20.239586 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 38232 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:20.240653 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.247132 systemd-logind[1541]: New session 2 of user core. 
Jul 10 00:21:20.256028 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:21:20.323144 sshd[1715]: Connection closed by 147.75.109.163 port 38232 Jul 10 00:21:20.322952 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:20.334393 systemd[1]: sshd@1-143.110.236.9:22-147.75.109.163:38232.service: Deactivated successfully. Jul 10 00:21:20.336488 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:21:20.337751 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:21:20.340860 systemd[1]: Started sshd@2-143.110.236.9:22-147.75.109.163:38238.service - OpenSSH per-connection server daemon (147.75.109.163:38238). Jul 10 00:21:20.342983 systemd-logind[1541]: Removed session 2. Jul 10 00:21:20.409521 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 38238 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:20.411549 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.417951 systemd-logind[1541]: New session 3 of user core. Jul 10 00:21:20.425994 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:21:20.485183 sshd[1723]: Connection closed by 147.75.109.163 port 38238 Jul 10 00:21:20.486103 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:20.497688 systemd[1]: sshd@2-143.110.236.9:22-147.75.109.163:38238.service: Deactivated successfully. Jul 10 00:21:20.499888 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:21:20.503186 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:21:20.505059 systemd[1]: Started sshd@3-143.110.236.9:22-147.75.109.163:38248.service - OpenSSH per-connection server daemon (147.75.109.163:38248). Jul 10 00:21:20.507115 systemd-logind[1541]: Removed session 3. Jul 10 00:21:20.570321 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 38248 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:20.572021 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.579925 systemd-logind[1541]: New session 4 of user core. Jul 10 00:21:20.599151 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:21:20.663856 sshd[1731]: Connection closed by 147.75.109.163 port 38248 Jul 10 00:21:20.664648 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:20.676001 systemd[1]: sshd@3-143.110.236.9:22-147.75.109.163:38248.service: Deactivated successfully. Jul 10 00:21:20.678605 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:21:20.679922 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:21:20.684135 systemd[1]: Started sshd@4-143.110.236.9:22-147.75.109.163:38264.service - OpenSSH per-connection server daemon (147.75.109.163:38264). Jul 10 00:21:20.685812 systemd-logind[1541]: Removed session 4. Jul 10 00:21:20.753475 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 38264 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:20.755341 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.761129 systemd-logind[1541]: New session 5 of user core. Jul 10 00:21:20.769993 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 00:21:20.846152 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:21:20.846621 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:20.869976 sudo[1740]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:20.874703 sshd[1739]: Connection closed by 147.75.109.163 port 38264 Jul 10 00:21:20.874299 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:20.886155 systemd[1]: sshd@4-143.110.236.9:22-147.75.109.163:38264.service: Deactivated successfully. Jul 10 00:21:20.888505 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:21:20.889392 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:21:20.894082 systemd[1]: Started sshd@5-143.110.236.9:22-147.75.109.163:38280.service - OpenSSH per-connection server daemon (147.75.109.163:38280). Jul 10 00:21:20.895559 systemd-logind[1541]: Removed session 5. Jul 10 00:21:20.975689 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 38280 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:20.977561 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.983693 systemd-logind[1541]: New session 6 of user core. Jul 10 00:21:20.995066 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:21:21.056637 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:21:21.057037 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:21.063901 sudo[1750]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:21.072284 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:21:21.072648 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:21.085826 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:21.143816 augenrules[1772]: No rules Jul 10 00:21:21.145240 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:21.145578 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:21.146745 sudo[1749]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:21.150081 sshd[1748]: Connection closed by 147.75.109.163 port 38280 Jul 10 00:21:21.150897 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:21.161296 systemd[1]: sshd@5-143.110.236.9:22-147.75.109.163:38280.service: Deactivated successfully. Jul 10 00:21:21.163649 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:21:21.164751 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:21:21.169017 systemd[1]: Started sshd@6-143.110.236.9:22-147.75.109.163:38292.service - OpenSSH per-connection server daemon (147.75.109.163:38292). Jul 10 00:21:21.170411 systemd-logind[1541]: Removed session 6. Jul 10 00:21:21.249577 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 38292 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:21.252197 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:21.258103 systemd-logind[1541]: New session 7 of user core. Jul 10 00:21:21.266951 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 10 00:21:21.330873 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:21:21.331315 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:21.839173 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:21:21.865515 (dockerd)[1802]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:21:22.279550 dockerd[1802]: time="2025-07-10T00:21:22.278631246Z" level=info msg="Starting up" Jul 10 00:21:22.281345 dockerd[1802]: time="2025-07-10T00:21:22.281293239Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:21:22.326732 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1474054797-merged.mount: Deactivated successfully. Jul 10 00:21:22.351129 dockerd[1802]: time="2025-07-10T00:21:22.350894748Z" level=info msg="Loading containers: start." Jul 10 00:21:22.361720 kernel: Initializing XFRM netlink socket Jul 10 00:21:22.664914 systemd-networkd[1450]: docker0: Link UP Jul 10 00:21:22.668719 dockerd[1802]: time="2025-07-10T00:21:22.668623637Z" level=info msg="Loading containers: done." Jul 10 00:21:22.685331 dockerd[1802]: time="2025-07-10T00:21:22.685270008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:21:22.685507 dockerd[1802]: time="2025-07-10T00:21:22.685378952Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:21:22.685550 dockerd[1802]: time="2025-07-10T00:21:22.685537557Z" level=info msg="Initializing buildkit" Jul 10 00:21:22.713908 dockerd[1802]: time="2025-07-10T00:21:22.713840735Z" level=info msg="Completed buildkit initialization" Jul 10 00:21:22.721846 dockerd[1802]: time="2025-07-10T00:21:22.721738310Z" level=info msg="Daemon has completed initialization" Jul 10 00:21:22.722533 dockerd[1802]: time="2025-07-10T00:21:22.721855978Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:21:22.722293 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:21:23.627054 containerd[1565]: time="2025-07-10T00:21:23.626640002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:21:24.205813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097049796.mount: Deactivated successfully. 
Jul 10 00:21:25.375667 containerd[1565]: time="2025-07-10T00:21:25.375567229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:25.377385 containerd[1565]: time="2025-07-10T00:21:25.377091259Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 10 00:21:25.378177 containerd[1565]: time="2025-07-10T00:21:25.378129807Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:25.381365 containerd[1565]: time="2025-07-10T00:21:25.381311891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:25.382600 containerd[1565]: time="2025-07-10T00:21:25.382561383Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.755856233s" Jul 10 00:21:25.383178 containerd[1565]: time="2025-07-10T00:21:25.382759141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 00:21:25.383781 containerd[1565]: time="2025-07-10T00:21:25.383748969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:21:26.923769 containerd[1565]: time="2025-07-10T00:21:26.923668857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:26.925062 containerd[1565]: time="2025-07-10T00:21:26.925015680Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 10 00:21:26.926295 containerd[1565]: time="2025-07-10T00:21:26.926161327Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:26.931551 containerd[1565]: time="2025-07-10T00:21:26.931447388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:26.932743 containerd[1565]: time="2025-07-10T00:21:26.932638330Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.548802068s" Jul 10 00:21:26.932743 containerd[1565]: time="2025-07-10T00:21:26.932696302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 
00:21:26.933429 containerd[1565]: time="2025-07-10T00:21:26.933328637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:21:28.182834 containerd[1565]: time="2025-07-10T00:21:28.182735725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:28.184251 containerd[1565]: time="2025-07-10T00:21:28.183948926Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 10 00:21:28.184991 containerd[1565]: time="2025-07-10T00:21:28.184962598Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:28.187524 containerd[1565]: time="2025-07-10T00:21:28.187488298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:28.188878 containerd[1565]: time="2025-07-10T00:21:28.188839788Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.255473385s" Jul 10 00:21:28.189116 containerd[1565]: time="2025-07-10T00:21:28.189013272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 00:21:28.189618 containerd[1565]: time="2025-07-10T00:21:28.189584471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:21:29.070317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:21:29.073928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:29.294603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:29.306698 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:29.383770 kubelet[2088]: E0710 00:21:29.383580 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:29.391418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130706604.mount: Deactivated successfully. Jul 10 00:21:29.395616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:29.395890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:29.397043 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. 
Jul 10 00:21:29.933440 containerd[1565]: time="2025-07-10T00:21:29.933367967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.934718 containerd[1565]: time="2025-07-10T00:21:29.934655349Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 10 00:21:29.935290 containerd[1565]: time="2025-07-10T00:21:29.935244644Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.937884 containerd[1565]: time="2025-07-10T00:21:29.937828207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.939241 containerd[1565]: time="2025-07-10T00:21:29.939145411Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.749525141s" Jul 10 00:21:29.939241 containerd[1565]: time="2025-07-10T00:21:29.939206260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 00:21:29.939941 containerd[1565]: time="2025-07-10T00:21:29.939861351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:21:29.942142 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jul 10 00:21:30.426668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661283003.mount: Deactivated successfully. 
Jul 10 00:21:31.486602 containerd[1565]: time="2025-07-10T00:21:31.486505082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:31.488300 containerd[1565]: time="2025-07-10T00:21:31.488249592Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 00:21:31.489798 containerd[1565]: time="2025-07-10T00:21:31.488419530Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:31.491895 containerd[1565]: time="2025-07-10T00:21:31.491810539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:31.493291 containerd[1565]: time="2025-07-10T00:21:31.492951059Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.553044069s" Jul 10 00:21:31.493291 containerd[1565]: time="2025-07-10T00:21:31.492996085Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:21:31.493586 containerd[1565]: time="2025-07-10T00:21:31.493538954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:21:31.943857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510340327.mount: Deactivated successfully. 
Jul 10 00:21:31.949733 containerd[1565]: time="2025-07-10T00:21:31.949267416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:31.950886 containerd[1565]: time="2025-07-10T00:21:31.950834825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:21:31.951715 containerd[1565]: time="2025-07-10T00:21:31.951672159Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:31.954915 containerd[1565]: time="2025-07-10T00:21:31.954817838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:31.955578 containerd[1565]: time="2025-07-10T00:21:31.955530954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.955202ms" Jul 10 00:21:31.955796 containerd[1565]: time="2025-07-10T00:21:31.955760955Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:21:31.956487 containerd[1565]: time="2025-07-10T00:21:31.956448297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:21:32.452387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363219353.mount: Deactivated successfully. Jul 10 00:21:33.048910 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jul 10 00:21:34.670866 containerd[1565]: time="2025-07-10T00:21:34.670795132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:34.672483 containerd[1565]: time="2025-07-10T00:21:34.672438062Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:34.672640 containerd[1565]: time="2025-07-10T00:21:34.672623589Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 10 00:21:34.676440 containerd[1565]: time="2025-07-10T00:21:34.676342068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:34.678447 containerd[1565]: time="2025-07-10T00:21:34.677984735Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.721399956s" Jul 10 00:21:34.678447 containerd[1565]: time="2025-07-10T00:21:34.678048397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 10 00:21:37.359426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:37.360266 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. Jul 10 00:21:37.363255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:37.398835 systemd[1]: Reload requested from client PID 2237 ('systemctl') (unit session-7.scope)... Jul 10 00:21:37.398859 systemd[1]: Reloading... Jul 10 00:21:37.555690 zram_generator::config[2283]: No configuration found. Jul 10 00:21:37.662914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:37.801836 systemd[1]: Reloading finished in 402 ms. Jul 10 00:21:37.862635 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:21:37.862735 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:21:37.863134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:37.863191 systemd[1]: kubelet.service: Consumed 122ms CPU time, 98.2M memory peak. Jul 10 00:21:37.865553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:38.042832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:38.050238 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:21:38.108023 kubelet[2334]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:38.108572 kubelet[2334]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 10 00:21:38.108912 kubelet[2334]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:38.109046 kubelet[2334]: I0710 00:21:38.108996 2334 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:21:39.025668 kubelet[2334]: I0710 00:21:39.025245 2334 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:21:39.025668 kubelet[2334]: I0710 00:21:39.025288 2334 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:21:39.025668 kubelet[2334]: I0710 00:21:39.025593 2334 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:21:39.050306 kubelet[2334]: I0710 00:21:39.050270 2334 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:21:39.057062 kubelet[2334]: E0710 00:21:39.056976 2334 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.110.236.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:39.072306 kubelet[2334]: I0710 00:21:39.072265 2334 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:21:39.079008 kubelet[2334]: I0710 00:21:39.078955 2334 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:21:39.080225 kubelet[2334]: I0710 00:21:39.080125 2334 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:21:39.080430 kubelet[2334]: I0710 00:21:39.080327 2334 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:21:39.080621 kubelet[2334]: I0710 00:21:39.080395 2334 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-2654026dcf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:21:39.080621 kubelet[2334]: I0710 00:21:39.080622 2334 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:21:39.080827 kubelet[2334]: I0710 00:21:39.080633 2334 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:21:39.080827 kubelet[2334]: I0710 00:21:39.080765 2334 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:39.085287 kubelet[2334]: I0710 00:21:39.084918 2334 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:21:39.085287 kubelet[2334]: I0710 00:21:39.084979 2334 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:21:39.085287 kubelet[2334]: I0710 00:21:39.085022 2334 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:21:39.085287 kubelet[2334]: I0710 00:21:39.085054 2334 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:21:39.087149 kubelet[2334]: W0710 00:21:39.086760 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.236.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-2654026dcf&limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:39.087149 kubelet[2334]: E0710 00:21:39.086966 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.110.236.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-2654026dcf&limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:39.088471 kubelet[2334]: W0710 00:21:39.088280 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.236.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:39.088471 kubelet[2334]: E0710 00:21:39.088332 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.236.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:39.089257 kubelet[2334]: I0710 00:21:39.089079 2334 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:21:39.092708 kubelet[2334]: I0710 00:21:39.092673 2334 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:21:39.093331 kubelet[2334]: W0710 00:21:39.092953 2334 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:21:39.094579 kubelet[2334]: I0710 00:21:39.094553 2334 server.go:1274] "Started kubelet" Jul 10 00:21:39.096494 kubelet[2334]: I0710 00:21:39.096434 2334 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:21:39.097596 kubelet[2334]: I0710 00:21:39.097571 2334 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:21:39.099871 kubelet[2334]: I0710 00:21:39.099120 2334 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:21:39.099871 kubelet[2334]: I0710 00:21:39.099439 2334 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:21:39.101483 kubelet[2334]: E0710 00:21:39.099776 2334 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.236.9:6443/api/v1/namespaces/default/events\": dial tcp 143.110.236.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-2654026dcf.1850bbf1110a27cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-2654026dcf,UID:ci-4344.1.1-n-2654026dcf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-2654026dcf,},FirstTimestamp:2025-07-10 00:21:39.094513611 +0000 UTC m=+1.039417154,LastTimestamp:2025-07-10 00:21:39.094513611 +0000 UTC m=+1.039417154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-2654026dcf,}" Jul 10 00:21:39.102186 kubelet[2334]: I0710 00:21:39.102155 2334 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:21:39.104167 kubelet[2334]: I0710 00:21:39.104138 2334 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:21:39.115524 kubelet[2334]: I0710 00:21:39.115492 2334 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:21:39.116551 kubelet[2334]: E0710 00:21:39.116511 2334 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-2654026dcf\" not found" Jul 10 00:21:39.121593 kubelet[2334]: E0710 00:21:39.121535 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.236.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-2654026dcf?timeout=10s\": dial tcp 143.110.236.9:6443: connect: connection refused" interval="200ms" Jul 10 00:21:39.122778 kubelet[2334]: I0710 00:21:39.121997 2334 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:21:39.122778 kubelet[2334]: I0710 00:21:39.122189 2334 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:21:39.123858 kubelet[2334]: I0710 00:21:39.123823 2334 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:21:39.124108 kubelet[2334]: I0710 00:21:39.124094 2334 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:21:39.128051 kubelet[2334]: I0710 00:21:39.127982 2334 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:21:39.145140 kubelet[2334]: E0710 00:21:39.145079 2334 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:39.151020 kubelet[2334]: I0710 00:21:39.150865 2334 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:21:39.152718 kubelet[2334]: I0710 00:21:39.152582 2334 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:21:39.152718 kubelet[2334]: I0710 00:21:39.152623 2334 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:21:39.152718 kubelet[2334]: I0710 00:21:39.152671 2334 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:21:39.152958 kubelet[2334]: E0710 00:21:39.152806 2334 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:39.153226 kubelet[2334]: W0710 00:21:39.153160 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.236.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:39.153315 kubelet[2334]: E0710 00:21:39.153232 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.236.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:39.164862 kubelet[2334]: W0710 00:21:39.164790 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.236.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:39.165729 kubelet[2334]: E0710 00:21:39.165104 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.236.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:39.171750 kubelet[2334]: I0710 00:21:39.171668 2334 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:21:39.171750 kubelet[2334]: I0710 00:21:39.171697 2334 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:39.172051 kubelet[2334]: I0710 00:21:39.171726 2334 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:39.174343 kubelet[2334]: I0710 00:21:39.174311 2334 policy_none.go:49] "None policy: Start" Jul 10 00:21:39.175700 kubelet[2334]: I0710 00:21:39.175560 2334 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:21:39.175700 kubelet[2334]: I0710 00:21:39.175599 2334 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:39.183383 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:21:39.196090 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:21:39.201130 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:21:39.218690 kubelet[2334]: E0710 00:21:39.217625 2334 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-2654026dcf\" not found" Jul 10 00:21:39.220510 kubelet[2334]: I0710 00:21:39.220427 2334 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:39.221150 kubelet[2334]: I0710 00:21:39.221006 2334 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:21:39.221342 kubelet[2334]: I0710 00:21:39.221283 2334 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:39.222633 kubelet[2334]: I0710 00:21:39.221903 2334 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:39.226273 kubelet[2334]: E0710 00:21:39.226227 2334 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-2654026dcf\" not found" Jul 10 00:21:39.264696 systemd[1]: Created slice kubepods-burstable-podd2939461bb7bdd6fcdd50e77b8791de8.slice - libcontainer container kubepods-burstable-podd2939461bb7bdd6fcdd50e77b8791de8.slice. Jul 10 00:21:39.288111 systemd[1]: Created slice kubepods-burstable-pod241a617b648ad64ca86816b29e6244d8.slice - libcontainer container kubepods-burstable-pod241a617b648ad64ca86816b29e6244d8.slice. Jul 10 00:21:39.306180 systemd[1]: Created slice kubepods-burstable-poddd541de92c9f5f6d1d4e65cd96ea4fde.slice - libcontainer container kubepods-burstable-poddd541de92c9f5f6d1d4e65cd96ea4fde.slice. Jul 10 00:21:39.322989 kubelet[2334]: E0710 00:21:39.322919 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.236.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-2654026dcf?timeout=10s\": dial tcp 143.110.236.9:6443: connect: connection refused" interval="400ms" Jul 10 00:21:39.323236 kubelet[2334]: I0710 00:21:39.323190 2334 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.323951 kubelet[2334]: E0710 00:21:39.323898 2334 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.236.9:6443/api/v1/nodes\": dial tcp 143.110.236.9:6443: connect: connection refused" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426021 kubelet[2334]: I0710 00:21:39.425930 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426021 kubelet[2334]: I0710 00:21:39.425985 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426021 kubelet[2334]: I0710 00:21:39.426008 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: 
\"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426021 kubelet[2334]: I0710 00:21:39.426024 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426021 kubelet[2334]: I0710 00:21:39.426043 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd541de92c9f5f6d1d4e65cd96ea4fde-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-2654026dcf\" (UID: \"dd541de92c9f5f6d1d4e65cd96ea4fde\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426416 kubelet[2334]: I0710 00:21:39.426059 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426416 kubelet[2334]: I0710 00:21:39.426075 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426416 kubelet[2334]: I0710 00:21:39.426091 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.426416 kubelet[2334]: I0710 00:21:39.426115 2334 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.525819 kubelet[2334]: I0710 00:21:39.525592 2334 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.526113 kubelet[2334]: E0710 00:21:39.526082 2334 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.236.9:6443/api/v1/nodes\": dial tcp 143.110.236.9:6443: connect: connection refused" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.585697 kubelet[2334]: E0710 00:21:39.585568 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:39.588177 containerd[1565]: time="2025-07-10T00:21:39.588120445Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-2654026dcf,Uid:d2939461bb7bdd6fcdd50e77b8791de8,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:39.603718 kubelet[2334]: E0710 00:21:39.602581 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:39.611808 kubelet[2334]: E0710 00:21:39.610902 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:39.614405 containerd[1565]: time="2025-07-10T00:21:39.614011059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-2654026dcf,Uid:241a617b648ad64ca86816b29e6244d8,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:39.615029 containerd[1565]: time="2025-07-10T00:21:39.614906885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-2654026dcf,Uid:dd541de92c9f5f6d1d4e65cd96ea4fde,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:39.722401 containerd[1565]: time="2025-07-10T00:21:39.722320021Z" level=info msg="connecting to shim 12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f" address="unix:///run/containerd/s/0604935cb3f5b1fb6ade8a90991d23d8abe86e57ff3b701e7f8b913cec3e9810" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:39.722718 containerd[1565]: time="2025-07-10T00:21:39.722356325Z" level=info msg="connecting to shim be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201" address="unix:///run/containerd/s/32e93df78ae33900ef0e44241ed06656e3f12ece4e4c7d07a8644401ed9961df" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:39.725742 kubelet[2334]: E0710 00:21:39.724791 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.236.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-2654026dcf?timeout=10s\": dial tcp 143.110.236.9:6443: connect: connection refused" interval="800ms" Jul 10 00:21:39.726882 containerd[1565]: time="2025-07-10T00:21:39.726841625Z" level=info msg="connecting to shim df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c" address="unix:///run/containerd/s/b1d728fe18b6e1f408eeeb8eebc92913a3025c028b009ebb5de6b70c498b8697" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:39.848971 systemd[1]: Started cri-containerd-12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f.scope - libcontainer container 12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f. Jul 10 00:21:39.851946 systemd[1]: Started cri-containerd-df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c.scope - libcontainer container df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c. Jul 10 00:21:39.859735 systemd[1]: Started cri-containerd-be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201.scope - libcontainer container be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201. 
Jul 10 00:21:39.930064 kubelet[2334]: I0710 00:21:39.929965 2334 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.931194 kubelet[2334]: E0710 00:21:39.931140 2334 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.236.9:6443/api/v1/nodes\": dial tcp 143.110.236.9:6443: connect: connection refused" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:39.959579 containerd[1565]: time="2025-07-10T00:21:39.959516273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-2654026dcf,Uid:241a617b648ad64ca86816b29e6244d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c\"" Jul 10 00:21:39.963362 kubelet[2334]: E0710 00:21:39.963325 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:39.970961 containerd[1565]: time="2025-07-10T00:21:39.970912228Z" level=info msg="CreateContainer within sandbox \"df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:21:39.978917 containerd[1565]: time="2025-07-10T00:21:39.978844692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-2654026dcf,Uid:d2939461bb7bdd6fcdd50e77b8791de8,Namespace:kube-system,Attempt:0,} returns sandbox id \"be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201\"" Jul 10 00:21:39.980948 kubelet[2334]: E0710 00:21:39.980914 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:39.991026 containerd[1565]: time="2025-07-10T00:21:39.990957964Z" level=info msg="CreateContainer within sandbox \"be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:21:40.005388 containerd[1565]: time="2025-07-10T00:21:40.005341632Z" level=info msg="Container 652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:40.007506 containerd[1565]: time="2025-07-10T00:21:40.007457050Z" level=info msg="Container 310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:40.011200 containerd[1565]: time="2025-07-10T00:21:40.011155374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-2654026dcf,Uid:dd541de92c9f5f6d1d4e65cd96ea4fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f\"" Jul 10 00:21:40.012605 kubelet[2334]: E0710 00:21:40.012562 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:40.016237 containerd[1565]: time="2025-07-10T00:21:40.016161931Z" level=info msg="CreateContainer within sandbox \"df0a161a3f58cf427f45d87907c1e77ddee3de6c5e50e13598d5242bb29f2c3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94\"" Jul 10 00:21:40.019820 containerd[1565]: 
time="2025-07-10T00:21:40.018489828Z" level=info msg="StartContainer for \"652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94\"" Jul 10 00:21:40.019820 containerd[1565]: time="2025-07-10T00:21:40.018838230Z" level=info msg="CreateContainer within sandbox \"12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:21:40.020869 containerd[1565]: time="2025-07-10T00:21:40.020790173Z" level=info msg="connecting to shim 652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94" address="unix:///run/containerd/s/b1d728fe18b6e1f408eeeb8eebc92913a3025c028b009ebb5de6b70c498b8697" protocol=ttrpc version=3 Jul 10 00:21:40.026034 containerd[1565]: time="2025-07-10T00:21:40.025920403Z" level=info msg="CreateContainer within sandbox \"be2b191c9b387f4293849b90e07d70118e19c14b60b12a886c7a47eecb25c201\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6\"" Jul 10 00:21:40.028203 containerd[1565]: time="2025-07-10T00:21:40.027405297Z" level=info msg="StartContainer for \"310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6\"" Jul 10 00:21:40.032451 containerd[1565]: time="2025-07-10T00:21:40.032393396Z" level=info msg="connecting to shim 310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6" address="unix:///run/containerd/s/32e93df78ae33900ef0e44241ed06656e3f12ece4e4c7d07a8644401ed9961df" protocol=ttrpc version=3 Jul 10 00:21:40.037885 containerd[1565]: time="2025-07-10T00:21:40.037832833Z" level=info msg="Container 4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:40.051923 systemd[1]: Started cri-containerd-652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94.scope - libcontainer container 652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94. Jul 10 00:21:40.060602 containerd[1565]: time="2025-07-10T00:21:40.059521250Z" level=info msg="CreateContainer within sandbox \"12de97de658294f85f03f0e62e703df92203da056731afcff39de8cbb9a8761f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21\"" Jul 10 00:21:40.062147 containerd[1565]: time="2025-07-10T00:21:40.061999763Z" level=info msg="StartContainer for \"4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21\"" Jul 10 00:21:40.075685 containerd[1565]: time="2025-07-10T00:21:40.074020267Z" level=info msg="connecting to shim 4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21" address="unix:///run/containerd/s/0604935cb3f5b1fb6ade8a90991d23d8abe86e57ff3b701e7f8b913cec3e9810" protocol=ttrpc version=3 Jul 10 00:21:40.089928 systemd[1]: Started cri-containerd-310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6.scope - libcontainer container 310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6. Jul 10 00:21:40.117907 systemd[1]: Started cri-containerd-4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21.scope - libcontainer container 4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21. 
Jul 10 00:21:40.200240 containerd[1565]: time="2025-07-10T00:21:40.200191287Z" level=info msg="StartContainer for \"652a7af426ffb95ceaba738c43ea6926354ac7452a6e4c8457877562ccb33d94\" returns successfully" Jul 10 00:21:40.242384 kubelet[2334]: W0710 00:21:40.242258 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.236.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:40.242384 kubelet[2334]: E0710 00:21:40.242315 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.236.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:40.259927 containerd[1565]: time="2025-07-10T00:21:40.259783357Z" level=info msg="StartContainer for \"310da268ba7e014681dc8437187351b68ee2ea1dbaa4ec7bab74b5176d84e9f6\" returns successfully" Jul 10 00:21:40.266116 kubelet[2334]: W0710 00:21:40.265839 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.236.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:40.266116 kubelet[2334]: E0710 00:21:40.265904 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.236.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:40.280934 kubelet[2334]: W0710 00:21:40.279894 2334 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.236.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.236.9:6443: connect: connection refused Jul 10 00:21:40.280934 kubelet[2334]: E0710 00:21:40.280879 2334 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.236.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.236.9:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:40.282732 containerd[1565]: time="2025-07-10T00:21:40.282662484Z" level=info msg="StartContainer for \"4c4555c3fa1812971bb9192631dfe226e600fd74cdc450670c190488113c3e21\" returns successfully" Jul 10 00:21:40.733393 kubelet[2334]: I0710 00:21:40.733022 2334 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:41.210474 kubelet[2334]: E0710 00:21:41.210309 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:41.219202 kubelet[2334]: E0710 00:21:41.214504 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:41.220939 kubelet[2334]: E0710 00:21:41.220816 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:42.224719 kubelet[2334]: E0710 00:21:42.224356 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:42.228669 kubelet[2334]: E0710 00:21:42.227750 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:42.228669 kubelet[2334]: E0710 00:21:42.228105 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:42.859075 kubelet[2334]: E0710 00:21:42.859022 2334 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-2654026dcf\" not found" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:42.991530 kubelet[2334]: I0710 00:21:42.991435 2334 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:43.011403 kubelet[2334]: E0710 00:21:43.011291 2334 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344.1.1-n-2654026dcf.1850bbf1110a27cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-2654026dcf,UID:ci-4344.1.1-n-2654026dcf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-2654026dcf,},FirstTimestamp:2025-07-10 00:21:39.094513611 +0000 UTC m=+1.039417154,LastTimestamp:2025-07-10 00:21:39.094513611 +0000 UTC m=+1.039417154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-2654026dcf,}" Jul 10 00:21:43.090527 kubelet[2334]: I0710 00:21:43.090466 2334 apiserver.go:52] "Watching apiserver" Jul 10 00:21:43.124372 kubelet[2334]: I0710 00:21:43.124235 2334 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:21:43.234300 kubelet[2334]: E0710 00:21:43.234253 2334 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:43.235006 kubelet[2334]: E0710 00:21:43.234672 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:44.204378 kubelet[2334]: W0710 00:21:44.204320 2334 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:44.205635 kubelet[2334]: E0710 00:21:44.205567 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:44.228099 kubelet[2334]: E0710 00:21:44.228051 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:44.287097 kubelet[2334]: W0710 00:21:44.287020 2334 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:44.288775 kubelet[2334]: E0710 00:21:44.288728 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:45.017989 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-7.scope)... Jul 10 00:21:45.018411 systemd[1]: Reloading... Jul 10 00:21:45.154789 zram_generator::config[2649]: No configuration found. Jul 10 00:21:45.231748 kubelet[2334]: E0710 00:21:45.231080 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:45.297831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:45.448482 systemd[1]: Reloading finished in 429 ms. Jul 10 00:21:45.482433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:45.490408 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:21:45.490717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:45.490788 systemd[1]: kubelet.service: Consumed 1.545s CPU time, 125M memory peak. Jul 10 00:21:45.494454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:45.675504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:45.687296 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:21:45.756524 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:45.756524 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:21:45.756524 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:21:45.757186 kubelet[2700]: I0710 00:21:45.756571 2700 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:21:45.771695 kubelet[2700]: I0710 00:21:45.768779 2700 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:21:45.771695 kubelet[2700]: I0710 00:21:45.769876 2700 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:21:45.771695 kubelet[2700]: I0710 00:21:45.770305 2700 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:21:45.772427 kubelet[2700]: I0710 00:21:45.772390 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:21:45.775207 kubelet[2700]: I0710 00:21:45.775165 2700 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:21:45.788595 kubelet[2700]: I0710 00:21:45.788556 2700 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:21:45.794059 kubelet[2700]: I0710 00:21:45.793994 2700 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:21:45.794317 kubelet[2700]: I0710 00:21:45.794207 2700 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:21:45.794547 kubelet[2700]: I0710 00:21:45.794386 2700 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:21:45.794959 kubelet[2700]: I0710 00:21:45.794553 2700 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-2654026dcf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:21:45.794959 kubelet[2700]: I0710 00:21:45.794952 2700 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:21:45.794959 kubelet[2700]: I0710 00:21:45.794969 2700 container_manager_linux.go:300] "Creating device 
plugin manager" Jul 10 00:21:45.795197 kubelet[2700]: I0710 00:21:45.795017 2700 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:45.795197 kubelet[2700]: I0710 00:21:45.795186 2700 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:21:45.795718 kubelet[2700]: I0710 00:21:45.795207 2700 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:21:45.795718 kubelet[2700]: I0710 00:21:45.795247 2700 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:21:45.795718 kubelet[2700]: I0710 00:21:45.795262 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:21:45.803605 kubelet[2700]: I0710 00:21:45.802800 2700 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:21:45.804326 kubelet[2700]: I0710 00:21:45.804303 2700 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:21:45.805017 kubelet[2700]: I0710 00:21:45.804992 2700 server.go:1274] "Started kubelet" Jul 10 00:21:45.810348 kubelet[2700]: I0710 00:21:45.810278 2700 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:21:45.813000 kubelet[2700]: I0710 00:21:45.812970 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:21:45.816738 kubelet[2700]: I0710 00:21:45.816433 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:21:45.820997 kubelet[2700]: I0710 00:21:45.813072 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:21:45.821202 kubelet[2700]: I0710 00:21:45.821180 2700 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:21:45.822870 kubelet[2700]: I0710 00:21:45.813035 2700 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:21:45.824123 kubelet[2700]: I0710 00:21:45.824087 2700 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:21:45.824343 kubelet[2700]: I0710 00:21:45.824244 2700 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:21:45.824436 kubelet[2700]: I0710 00:21:45.824422 2700 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:21:45.827349 kubelet[2700]: I0710 00:21:45.826622 2700 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:21:45.827349 kubelet[2700]: I0710 00:21:45.826786 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:21:45.829832 kubelet[2700]: E0710 00:21:45.829774 2700 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:45.833241 kubelet[2700]: I0710 00:21:45.833193 2700 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:21:45.858108 kubelet[2700]: I0710 00:21:45.858038 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:21:45.859899 kubelet[2700]: I0710 00:21:45.859818 2700 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:21:45.859899 kubelet[2700]: I0710 00:21:45.859858 2700 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:21:45.859899 kubelet[2700]: I0710 00:21:45.859884 2700 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:21:45.860082 kubelet[2700]: E0710 00:21:45.859938 2700 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:45.908950 kubelet[2700]: I0710 00:21:45.908909 2700 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:21:45.908950 kubelet[2700]: I0710 00:21:45.908933 2700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:45.908950 kubelet[2700]: I0710 00:21:45.908961 2700 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:45.909170 kubelet[2700]: I0710 00:21:45.909153 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:21:45.909200 kubelet[2700]: I0710 00:21:45.909171 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:21:45.909200 kubelet[2700]: I0710 00:21:45.909190 2700 policy_none.go:49] "None policy: Start" Jul 10 00:21:45.910201 kubelet[2700]: I0710 00:21:45.910169 2700 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:21:45.910362 kubelet[2700]: I0710 00:21:45.910210 2700 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:45.910425 kubelet[2700]: I0710 00:21:45.910400 2700 state_mem.go:75] "Updated machine memory state" Jul 10 00:21:45.917723 kubelet[2700]: I0710 00:21:45.917250 2700 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:45.917723 kubelet[2700]: I0710 00:21:45.917576 2700 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:21:45.917723 kubelet[2700]: I0710 00:21:45.917602 2700 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:45.919707 kubelet[2700]: I0710 00:21:45.918387 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:45.971147 kubelet[2700]: W0710 00:21:45.969882 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:45.975287 kubelet[2700]: W0710 00:21:45.975237 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:45.975287 kubelet[2700]: W0710 00:21:45.975300 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:45.975492 kubelet[2700]: E0710 00:21:45.975362 2700 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:45.975532 kubelet[2700]: E0710 00:21:45.975498 2700 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.027683 kubelet[2700]: I0710 00:21:46.026984 2700 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031630 
kubelet[2700]: I0710 00:21:46.029183 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031630 kubelet[2700]: I0710 00:21:46.029245 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031630 kubelet[2700]: I0710 00:21:46.029271 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031630 kubelet[2700]: I0710 00:21:46.029289 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031630 kubelet[2700]: I0710 00:21:46.029311 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031909 kubelet[2700]: I0710 00:21:46.029359 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd541de92c9f5f6d1d4e65cd96ea4fde-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-2654026dcf\" (UID: \"dd541de92c9f5f6d1d4e65cd96ea4fde\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031909 kubelet[2700]: I0710 00:21:46.029376 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031909 kubelet[2700]: I0710 00:21:46.029394 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2939461bb7bdd6fcdd50e77b8791de8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" (UID: \"d2939461bb7bdd6fcdd50e77b8791de8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.031909 kubelet[2700]: I0710 00:21:46.029415 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/241a617b648ad64ca86816b29e6244d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-2654026dcf\" (UID: \"241a617b648ad64ca86816b29e6244d8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.052098 sudo[2733]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:21:46.053106 sudo[2733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:21:46.072126 kubelet[2700]: I0710 00:21:46.071882 2700 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.072126 kubelet[2700]: I0710 00:21:46.071989 2700 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.272136 kubelet[2700]: E0710 00:21:46.271274 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.277577 kubelet[2700]: E0710 00:21:46.277417 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.277577 kubelet[2700]: E0710 00:21:46.277425 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.797734 kubelet[2700]: I0710 00:21:46.797521 2700 apiserver.go:52] "Watching apiserver" Jul 10 00:21:46.807919 sudo[2733]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:46.825359 kubelet[2700]: I0710 00:21:46.825288 2700 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:21:46.894153 kubelet[2700]: E0710 00:21:46.893600 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.894421 kubelet[2700]: E0710 00:21:46.894400 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.925007 kubelet[2700]: W0710 00:21:46.924961 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:21:46.925367 kubelet[2700]: E0710 00:21:46.925345 2700 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-2654026dcf\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" Jul 10 00:21:46.925844 kubelet[2700]: E0710 00:21:46.925825 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:46.968369 kubelet[2700]: I0710 00:21:46.968219 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-2654026dcf" podStartSLOduration=1.9681982470000001 podStartE2EDuration="1.968198247s" podCreationTimestamp="2025-07-10 00:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:46.949337382 +0000 UTC m=+1.253703859" watchObservedRunningTime="2025-07-10 00:21:46.968198247 +0000 UTC m=+1.272564727" Jul 10 00:21:46.982815 kubelet[2700]: I0710 00:21:46.982757 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-2654026dcf" podStartSLOduration=2.9827233509999997 podStartE2EDuration="2.982723351s" podCreationTimestamp="2025-07-10 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:46.982537051 +0000 UTC m=+1.286903531" watchObservedRunningTime="2025-07-10 00:21:46.982723351 +0000 UTC m=+1.287089824" Jul 10 00:21:46.983069 kubelet[2700]: I0710 00:21:46.982893 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-2654026dcf" podStartSLOduration=2.9828770479999998 podStartE2EDuration="2.982877048s" podCreationTimestamp="2025-07-10 00:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:46.969532925 +0000 UTC m=+1.273899405" watchObservedRunningTime="2025-07-10 00:21:46.982877048 +0000 UTC m=+1.287243530" Jul 10 00:21:47.896737 kubelet[2700]: E0710 00:21:47.896689 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:48.718655 sudo[1784]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:48.722157 sshd[1783]: Connection closed by 147.75.109.163 port 38292 Jul 10 00:21:48.723128 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:48.728362 systemd[1]: sshd@6-143.110.236.9:22-147.75.109.163:38292.service: Deactivated successfully. Jul 10 00:21:48.733169 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:21:48.733880 systemd[1]: session-7.scope: Consumed 5.302s CPU time, 222.7M memory peak. Jul 10 00:21:48.736103 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:21:48.738930 systemd-logind[1541]: Removed session 7. Jul 10 00:21:48.899273 kubelet[2700]: E0710 00:21:48.899162 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:50.762888 kubelet[2700]: I0710 00:21:50.762829 2700 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:21:50.763812 containerd[1565]: time="2025-07-10T00:21:50.763683021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:21:50.764364 kubelet[2700]: I0710 00:21:50.763911 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:21:50.878814 kubelet[2700]: E0710 00:21:50.878320 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:50.902864 kubelet[2700]: E0710 00:21:50.902824 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:51.406452 systemd[1]: Created slice kubepods-burstable-podf48a3781_e279_44c8_b050_8f86c8042e5d.slice - libcontainer container kubepods-burstable-podf48a3781_e279_44c8_b050_8f86c8042e5d.slice. Jul 10 00:21:51.428812 systemd[1]: Created slice kubepods-besteffort-pod921ac7f3_c383_44a9_9e98_c0f5397d9389.slice - libcontainer container kubepods-besteffort-pod921ac7f3_c383_44a9_9e98_c0f5397d9389.slice. Jul 10 00:21:51.465573 kubelet[2700]: I0710 00:21:51.465531 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-hostproc\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465573 kubelet[2700]: I0710 00:21:51.465569 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-cgroup\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465596 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f48a3781-e279-44c8-b050-8f86c8042e5d-clustermesh-secrets\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465672 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-config-path\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465704 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cni-path\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465728 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-hubble-tls\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465752 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/921ac7f3-c383-44a9-9e98-c0f5397d9389-lib-modules\") pod \"kube-proxy-8tlzc\" (UID: \"921ac7f3-c383-44a9-9e98-c0f5397d9389\") " pod="kube-system/kube-proxy-8tlzc" Jul 10 00:21:51.465803 kubelet[2700]: I0710 00:21:51.465785 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-bpf-maps\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465993 kubelet[2700]: I0710 00:21:51.465813 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-etc-cni-netd\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465993 kubelet[2700]: I0710 00:21:51.465841 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-net\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465993 kubelet[2700]: I0710 00:21:51.465861 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-xtables-lock\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465993 kubelet[2700]: I0710 00:21:51.465883 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xck\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-kube-api-access-j5xck\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.465993 kubelet[2700]: I0710 00:21:51.465905 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-kernel\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.466161 kubelet[2700]: I0710 00:21:51.465930 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/921ac7f3-c383-44a9-9e98-c0f5397d9389-xtables-lock\") pod \"kube-proxy-8tlzc\" (UID: \"921ac7f3-c383-44a9-9e98-c0f5397d9389\") " pod="kube-system/kube-proxy-8tlzc" Jul 10 00:21:51.466161 kubelet[2700]: I0710 00:21:51.465956 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-lib-modules\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.466161 kubelet[2700]: I0710 00:21:51.466005 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/921ac7f3-c383-44a9-9e98-c0f5397d9389-kube-proxy\") pod \"kube-proxy-8tlzc\" (UID: \"921ac7f3-c383-44a9-9e98-c0f5397d9389\") " 
pod="kube-system/kube-proxy-8tlzc" Jul 10 00:21:51.466161 kubelet[2700]: I0710 00:21:51.466032 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-run\") pod \"cilium-cctvd\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " pod="kube-system/cilium-cctvd" Jul 10 00:21:51.466161 kubelet[2700]: I0710 00:21:51.466058 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qkz\" (UniqueName: \"kubernetes.io/projected/921ac7f3-c383-44a9-9e98-c0f5397d9389-kube-api-access-52qkz\") pod \"kube-proxy-8tlzc\" (UID: \"921ac7f3-c383-44a9-9e98-c0f5397d9389\") " pod="kube-system/kube-proxy-8tlzc" Jul 10 00:21:51.722187 kubelet[2700]: E0710 00:21:51.721945 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:51.724041 containerd[1565]: time="2025-07-10T00:21:51.723816569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cctvd,Uid:f48a3781-e279-44c8-b050-8f86c8042e5d,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:51.736803 kubelet[2700]: E0710 00:21:51.736756 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:51.738701 containerd[1565]: time="2025-07-10T00:21:51.737609753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tlzc,Uid:921ac7f3-c383-44a9-9e98-c0f5397d9389,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:51.757311 containerd[1565]: time="2025-07-10T00:21:51.757043440Z" level=info msg="connecting to shim 7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:51.781937 containerd[1565]: time="2025-07-10T00:21:51.781735135Z" level=info msg="connecting to shim fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2" address="unix:///run/containerd/s/4e9b47b8ce02b01999007d058e24c338fcf39124057c52d0ef0176743aa578ed" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:51.817876 systemd[1]: Created slice kubepods-besteffort-podcfa4a743_f4f2_4b9a_809c_67302c3ed879.slice - libcontainer container kubepods-besteffort-podcfa4a743_f4f2_4b9a_809c_67302c3ed879.slice. Jul 10 00:21:51.839211 systemd[1]: Started cri-containerd-7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a.scope - libcontainer container 7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a. 
Jul 10 00:21:51.869353 kubelet[2700]: I0710 00:21:51.869307 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfa4a743-f4f2-4b9a-809c-67302c3ed879-cilium-config-path\") pod \"cilium-operator-5d85765b45-d7gtk\" (UID: \"cfa4a743-f4f2-4b9a-809c-67302c3ed879\") " pod="kube-system/cilium-operator-5d85765b45-d7gtk" Jul 10 00:21:51.870846 kubelet[2700]: I0710 00:21:51.870392 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmzh\" (UniqueName: \"kubernetes.io/projected/cfa4a743-f4f2-4b9a-809c-67302c3ed879-kube-api-access-jvmzh\") pod \"cilium-operator-5d85765b45-d7gtk\" (UID: \"cfa4a743-f4f2-4b9a-809c-67302c3ed879\") " pod="kube-system/cilium-operator-5d85765b45-d7gtk" Jul 10 00:21:51.887996 systemd[1]: Started cri-containerd-fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2.scope - libcontainer container fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2. Jul 10 00:21:51.922011 containerd[1565]: time="2025-07-10T00:21:51.921938949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cctvd,Uid:f48a3781-e279-44c8-b050-8f86c8042e5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\"" Jul 10 00:21:51.926318 kubelet[2700]: E0710 00:21:51.925713 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:51.930120 containerd[1565]: time="2025-07-10T00:21:51.928856693Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:21:51.935918 systemd-resolved[1401]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
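The recurring dns.go:153 "Nameserver limits exceeded" errors come from the kubelet passing through at most three nameservers from the host's resolv.conf (the traditional resolv.conf limit); the applied line it logs, "67.207.67.3 67.207.67.2 67.207.67.3", even contains a duplicate, so the droplet's resolv.conf evidently lists more entries than fit. A simplified sketch of that kind of truncation (illustrative only; the kubelet's real logic lives in its dns package):

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the conventional resolv.conf limit of three
// nameservers that the kubelet enforces for a pod's DNS config.
const maxNameservers = 3

// fitNameservers truncates the list and reports whether entries were dropped.
func fitNameservers(nameservers []string) ([]string, bool) {
	if len(nameservers) <= maxNameservers {
		return nameservers, false
	}
	return nameservers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf contents; a list like this (duplicate
	// plus a fourth entry) would reproduce the applied line in the log.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "1.1.1.1"}
	if applied, exceeded := fitNameservers(host); exceeded {
		fmt.Println("Nameserver limits exceeded, applied line:", strings.Join(applied, " "))
	}
}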
Jul 10 00:21:51.951403 containerd[1565]: time="2025-07-10T00:21:51.951304206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tlzc,Uid:921ac7f3-c383-44a9-9e98-c0f5397d9389,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2\"" Jul 10 00:21:51.952934 kubelet[2700]: E0710 00:21:51.952861 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:51.960197 containerd[1565]: time="2025-07-10T00:21:51.960111684Z" level=info msg="CreateContainer within sandbox \"fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:21:51.974699 containerd[1565]: time="2025-07-10T00:21:51.974151929Z" level=info msg="Container a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:51.988025 containerd[1565]: time="2025-07-10T00:21:51.987956042Z" level=info msg="CreateContainer within sandbox \"fa8ff40e3a362895d9a05ca97b8eaa6b6321151e7132db6fa700db28daafa4f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d\"" Jul 10 00:21:51.989236 containerd[1565]: time="2025-07-10T00:21:51.989190986Z" level=info msg="StartContainer for \"a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d\"" Jul 10 00:21:51.992719 containerd[1565]: time="2025-07-10T00:21:51.992609858Z" level=info msg="connecting to shim a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d" address="unix:///run/containerd/s/4e9b47b8ce02b01999007d058e24c338fcf39124057c52d0ef0176743aa578ed" protocol=ttrpc version=3 Jul 10 00:21:52.018970 systemd[1]: Started cri-containerd-a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d.scope - libcontainer container a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d. Jul 10 00:21:52.075325 containerd[1565]: time="2025-07-10T00:21:52.074860512Z" level=info msg="StartContainer for \"a082ec5164e3722f4d91e5361287e5c6bf26d6b08da4c379c771404a221b8e6d\" returns successfully" Jul 10 00:21:52.130219 kubelet[2700]: E0710 00:21:52.130138 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:52.133329 containerd[1565]: time="2025-07-10T00:21:52.133028121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d7gtk,Uid:cfa4a743-f4f2-4b9a-809c-67302c3ed879,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:52.156729 containerd[1565]: time="2025-07-10T00:21:52.156672999Z" level=info msg="connecting to shim 1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa" address="unix:///run/containerd/s/90c0352808e6a89dfe15285c30e8db868e793c3d64c16f86173b24cd460600af" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:52.196901 systemd[1]: Started cri-containerd-1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa.scope - libcontainer container 1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa. 
Jul 10 00:21:52.301875 containerd[1565]: time="2025-07-10T00:21:52.300516347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d7gtk,Uid:cfa4a743-f4f2-4b9a-809c-67302c3ed879,Namespace:kube-system,Attempt:0,} returns sandbox id \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\"" Jul 10 00:21:52.302971 kubelet[2700]: E0710 00:21:52.302940 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:52.913528 kubelet[2700]: E0710 00:21:52.913476 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:52.928819 kubelet[2700]: I0710 00:21:52.927996 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8tlzc" podStartSLOduration=1.927967068 podStartE2EDuration="1.927967068s" podCreationTimestamp="2025-07-10 00:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:52.927516841 +0000 UTC m=+7.231883322" watchObservedRunningTime="2025-07-10 00:21:52.927967068 +0000 UTC m=+7.232333550" Jul 10 00:21:54.727271 kubelet[2700]: E0710 00:21:54.726964 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:54.921620 kubelet[2700]: E0710 00:21:54.921545 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:57.651532 kubelet[2700]: E0710 00:21:57.651491 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:58.733324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567774767.mount: Deactivated successfully. 
Jul 10 00:22:01.165849 containerd[1565]: time="2025-07-10T00:22:01.165775661Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:01.167457 containerd[1565]: time="2025-07-10T00:22:01.167405382Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:22:01.167946 containerd[1565]: time="2025-07-10T00:22:01.167912657Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:01.169843 containerd[1565]: time="2025-07-10T00:22:01.169797919Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.239159746s" Jul 10 00:22:01.169843 containerd[1565]: time="2025-07-10T00:22:01.169840583Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:22:01.171699 containerd[1565]: time="2025-07-10T00:22:01.171657931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:22:01.176202 containerd[1565]: time="2025-07-10T00:22:01.175671956Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:22:01.202780 containerd[1565]: time="2025-07-10T00:22:01.201516248Z" level=info msg="Container a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:01.205787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100230337.mount: Deactivated successfully. Jul 10 00:22:01.233661 containerd[1565]: time="2025-07-10T00:22:01.233544705Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\"" Jul 10 00:22:01.235510 containerd[1565]: time="2025-07-10T00:22:01.235159987Z" level=info msg="StartContainer for \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\"" Jul 10 00:22:01.239752 containerd[1565]: time="2025-07-10T00:22:01.239576174Z" level=info msg="connecting to shim a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" protocol=ttrpc version=3 Jul 10 00:22:01.272299 update_engine[1542]: I20250710 00:22:01.271669 1542 update_attempter.cc:509] Updating boot flags... Jul 10 00:22:01.344027 systemd[1]: Started cri-containerd-a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207.scope - libcontainer container a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207. 
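As a rough sanity check on the pull that just completed: the cilium image (reported size 166719855 bytes) was fetched in 9.239159746s, which works out to about 17 MiB/s. A trivial sketch of the arithmetic:

package main

import "fmt"

func main() {
	// Values taken from the "Pulled image ... cilium ..." message above.
	const bytesPulled = 166719855.0 // reported image size in bytes
	const seconds = 9.239159746     // reported pull duration

	// Prints roughly 17.2 MiB/s.
	fmt.Printf("%.1f MiB/s\n", bytesPulled/seconds/(1024*1024))
}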
Jul 10 00:22:01.461118 containerd[1565]: time="2025-07-10T00:22:01.457086500Z" level=info msg="StartContainer for \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" returns successfully" Jul 10 00:22:01.484200 systemd[1]: cri-containerd-a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207.scope: Deactivated successfully. Jul 10 00:22:01.690696 containerd[1565]: time="2025-07-10T00:22:01.690607073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" id:\"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" pid:3120 exited_at:{seconds:1752106921 nanos:493851966}" Jul 10 00:22:01.700464 containerd[1565]: time="2025-07-10T00:22:01.700363210Z" level=info msg="received exit event container_id:\"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" id:\"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" pid:3120 exited_at:{seconds:1752106921 nanos:493851966}" Jul 10 00:22:01.759439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207-rootfs.mount: Deactivated successfully. Jul 10 00:22:01.946396 kubelet[2700]: E0710 00:22:01.946357 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:01.952283 containerd[1565]: time="2025-07-10T00:22:01.952142312Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:22:01.971676 containerd[1565]: time="2025-07-10T00:22:01.971416906Z" level=info msg="Container 8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:01.986524 containerd[1565]: time="2025-07-10T00:22:01.986411825Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\"" Jul 10 00:22:01.987709 containerd[1565]: time="2025-07-10T00:22:01.987368459Z" level=info msg="StartContainer for \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\"" Jul 10 00:22:01.989109 containerd[1565]: time="2025-07-10T00:22:01.989076910Z" level=info msg="connecting to shim 8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" protocol=ttrpc version=3 Jul 10 00:22:02.016029 systemd[1]: Started cri-containerd-8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1.scope - libcontainer container 8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1. Jul 10 00:22:02.068678 containerd[1565]: time="2025-07-10T00:22:02.068579744Z" level=info msg="StartContainer for \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" returns successfully" Jul 10 00:22:02.089440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:22:02.089881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:02.091070 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
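The exited_at {seconds: ... nanos: ...} fields in the TaskExit events above are plain Unix timestamps; converting the first one lines up with the surrounding journal times (the mount-cgroup container exits right around 00:22:01.49). A small illustrative conversion (not containerd code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at value from the a862348... TaskExit event above.
	exited := time.Unix(1752106921, 493851966).UTC()

	// Prints 2025-07-10T00:22:01.493851966Z, matching the journal around the exit.
	fmt.Println(exited.Format(time.RFC3339Nano))
}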
Jul 10 00:22:02.093857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:22:02.096140 systemd[1]: cri-containerd-8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1.scope: Deactivated successfully. Jul 10 00:22:02.100482 containerd[1565]: time="2025-07-10T00:22:02.100428457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" id:\"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" pid:3169 exited_at:{seconds:1752106922 nanos:98168980}" Jul 10 00:22:02.113910 containerd[1565]: time="2025-07-10T00:22:02.113848750Z" level=info msg="received exit event container_id:\"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" id:\"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" pid:3169 exited_at:{seconds:1752106922 nanos:98168980}" Jul 10 00:22:02.137325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:02.472147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580078145.mount: Deactivated successfully. Jul 10 00:22:02.956803 kubelet[2700]: E0710 00:22:02.956630 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:02.968303 containerd[1565]: time="2025-07-10T00:22:02.968235116Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:22:03.010796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932661235.mount: Deactivated successfully. Jul 10 00:22:03.015916 containerd[1565]: time="2025-07-10T00:22:03.015482426Z" level=info msg="Container b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:03.035160 containerd[1565]: time="2025-07-10T00:22:03.035105277Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\"" Jul 10 00:22:03.038758 containerd[1565]: time="2025-07-10T00:22:03.038264689Z" level=info msg="StartContainer for \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\"" Jul 10 00:22:03.042419 containerd[1565]: time="2025-07-10T00:22:03.042343047Z" level=info msg="connecting to shim b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" protocol=ttrpc version=3 Jul 10 00:22:03.076054 systemd[1]: Started cri-containerd-b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e.scope - libcontainer container b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e. Jul 10 00:22:03.145975 systemd[1]: cri-containerd-b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e.scope: Deactivated successfully. 
Jul 10 00:22:03.148541 containerd[1565]: time="2025-07-10T00:22:03.148442755Z" level=info msg="received exit event container_id:\"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" id:\"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" pid:3229 exited_at:{seconds:1752106923 nanos:147613635}" Jul 10 00:22:03.150923 containerd[1565]: time="2025-07-10T00:22:03.150847145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" id:\"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" pid:3229 exited_at:{seconds:1752106923 nanos:147613635}" Jul 10 00:22:03.167873 containerd[1565]: time="2025-07-10T00:22:03.167719099Z" level=info msg="StartContainer for \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" returns successfully" Jul 10 00:22:03.201941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221984068.mount: Deactivated successfully. Jul 10 00:22:03.280926 containerd[1565]: time="2025-07-10T00:22:03.280774978Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:03.281978 containerd[1565]: time="2025-07-10T00:22:03.281927285Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:22:03.282685 containerd[1565]: time="2025-07-10T00:22:03.282568427Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:03.284680 containerd[1565]: time="2025-07-10T00:22:03.284604314Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.11275776s" Jul 10 00:22:03.284680 containerd[1565]: time="2025-07-10T00:22:03.284661129Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:22:03.287965 containerd[1565]: time="2025-07-10T00:22:03.287903741Z" level=info msg="CreateContainer within sandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:22:03.301552 containerd[1565]: time="2025-07-10T00:22:03.300860714Z" level=info msg="Container da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:03.314075 containerd[1565]: time="2025-07-10T00:22:03.313986408Z" level=info msg="CreateContainer within sandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\"" Jul 10 00:22:03.315955 containerd[1565]: time="2025-07-10T00:22:03.315010951Z" level=info msg="StartContainer for 
\"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\"" Jul 10 00:22:03.317341 containerd[1565]: time="2025-07-10T00:22:03.317184053Z" level=info msg="connecting to shim da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72" address="unix:///run/containerd/s/90c0352808e6a89dfe15285c30e8db868e793c3d64c16f86173b24cd460600af" protocol=ttrpc version=3 Jul 10 00:22:03.344922 systemd[1]: Started cri-containerd-da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72.scope - libcontainer container da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72. Jul 10 00:22:03.394769 containerd[1565]: time="2025-07-10T00:22:03.394710042Z" level=info msg="StartContainer for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" returns successfully" Jul 10 00:22:03.970711 kubelet[2700]: E0710 00:22:03.968412 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:03.984073 kubelet[2700]: E0710 00:22:03.983778 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:03.988053 containerd[1565]: time="2025-07-10T00:22:03.987995388Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:22:04.006665 containerd[1565]: time="2025-07-10T00:22:04.004663519Z" level=info msg="Container be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:04.022168 containerd[1565]: time="2025-07-10T00:22:04.022107114Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\"" Jul 10 00:22:04.024420 containerd[1565]: time="2025-07-10T00:22:04.023005400Z" level=info msg="StartContainer for \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\"" Jul 10 00:22:04.024943 containerd[1565]: time="2025-07-10T00:22:04.024893570Z" level=info msg="connecting to shim be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" protocol=ttrpc version=3 Jul 10 00:22:04.066967 systemd[1]: Started cri-containerd-be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517.scope - libcontainer container be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517. Jul 10 00:22:04.164327 systemd[1]: cri-containerd-be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517.scope: Deactivated successfully. 
Jul 10 00:22:04.168145 containerd[1565]: time="2025-07-10T00:22:04.168071597Z" level=info msg="StartContainer for \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" returns successfully" Jul 10 00:22:04.170712 containerd[1565]: time="2025-07-10T00:22:04.169924683Z" level=info msg="received exit event container_id:\"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" id:\"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" pid:3304 exited_at:{seconds:1752106924 nanos:168881989}" Jul 10 00:22:04.172454 containerd[1565]: time="2025-07-10T00:22:04.172057623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" id:\"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" pid:3304 exited_at:{seconds:1752106924 nanos:168881989}" Jul 10 00:22:04.223141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517-rootfs.mount: Deactivated successfully. Jul 10 00:22:04.326586 kubelet[2700]: I0710 00:22:04.325956 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d7gtk" podStartSLOduration=2.344019661 podStartE2EDuration="13.325929991s" podCreationTimestamp="2025-07-10 00:21:51 +0000 UTC" firstStartedPulling="2025-07-10 00:21:52.303987566 +0000 UTC m=+6.608354041" lastFinishedPulling="2025-07-10 00:22:03.285897894 +0000 UTC m=+17.590264371" observedRunningTime="2025-07-10 00:22:04.214408003 +0000 UTC m=+18.518774493" watchObservedRunningTime="2025-07-10 00:22:04.325929991 +0000 UTC m=+18.630296466" Jul 10 00:22:04.995631 kubelet[2700]: E0710 00:22:04.994210 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:04.998304 kubelet[2700]: E0710 00:22:04.997298 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:05.007314 containerd[1565]: time="2025-07-10T00:22:05.006919882Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:22:05.028697 containerd[1565]: time="2025-07-10T00:22:05.028622617Z" level=info msg="Container b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:05.046178 containerd[1565]: time="2025-07-10T00:22:05.046119489Z" level=info msg="CreateContainer within sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\"" Jul 10 00:22:05.047635 containerd[1565]: time="2025-07-10T00:22:05.047246506Z" level=info msg="StartContainer for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\"" Jul 10 00:22:05.048913 containerd[1565]: time="2025-07-10T00:22:05.048863956Z" level=info msg="connecting to shim b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e" address="unix:///run/containerd/s/a115f38af86180b9b88d5d756a22103e411081ecea3aff75db84eaaf2d6b7c7d" protocol=ttrpc version=3 Jul 10 00:22:05.083075 systemd[1]: Started 
cri-containerd-b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e.scope - libcontainer container b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e. Jul 10 00:22:05.137559 containerd[1565]: time="2025-07-10T00:22:05.137520543Z" level=info msg="StartContainer for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" returns successfully" Jul 10 00:22:05.250787 containerd[1565]: time="2025-07-10T00:22:05.249392334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" id:\"3a777a299a8f0ccf5749c3ced4b36dee7b05711c8f91b55f63dcc2c235dd133b\" pid:3372 exited_at:{seconds:1752106925 nanos:248928582}" Jul 10 00:22:05.273889 kubelet[2700]: I0710 00:22:05.273737 2700 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:22:05.348438 systemd[1]: Created slice kubepods-burstable-pod56e0a3e4_dec2_4c45_aef8_0464474071fe.slice - libcontainer container kubepods-burstable-pod56e0a3e4_dec2_4c45_aef8_0464474071fe.slice. Jul 10 00:22:05.359540 systemd[1]: Created slice kubepods-burstable-podafcf53aa_dd14_4252_aaed_a0045ee4e6a2.slice - libcontainer container kubepods-burstable-podafcf53aa_dd14_4252_aaed_a0045ee4e6a2.slice. Jul 10 00:22:05.377493 kubelet[2700]: I0710 00:22:05.377136 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5qj\" (UniqueName: \"kubernetes.io/projected/afcf53aa-dd14-4252-aaed-a0045ee4e6a2-kube-api-access-ch5qj\") pod \"coredns-7c65d6cfc9-fkqlg\" (UID: \"afcf53aa-dd14-4252-aaed-a0045ee4e6a2\") " pod="kube-system/coredns-7c65d6cfc9-fkqlg" Jul 10 00:22:05.377493 kubelet[2700]: I0710 00:22:05.377207 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844st\" (UniqueName: \"kubernetes.io/projected/56e0a3e4-dec2-4c45-aef8-0464474071fe-kube-api-access-844st\") pod \"coredns-7c65d6cfc9-d8mll\" (UID: \"56e0a3e4-dec2-4c45-aef8-0464474071fe\") " pod="kube-system/coredns-7c65d6cfc9-d8mll" Jul 10 00:22:05.377493 kubelet[2700]: I0710 00:22:05.377244 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56e0a3e4-dec2-4c45-aef8-0464474071fe-config-volume\") pod \"coredns-7c65d6cfc9-d8mll\" (UID: \"56e0a3e4-dec2-4c45-aef8-0464474071fe\") " pod="kube-system/coredns-7c65d6cfc9-d8mll" Jul 10 00:22:05.377493 kubelet[2700]: I0710 00:22:05.377269 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afcf53aa-dd14-4252-aaed-a0045ee4e6a2-config-volume\") pod \"coredns-7c65d6cfc9-fkqlg\" (UID: \"afcf53aa-dd14-4252-aaed-a0045ee4e6a2\") " pod="kube-system/coredns-7c65d6cfc9-fkqlg" Jul 10 00:22:05.657617 kubelet[2700]: E0710 00:22:05.657575 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:05.658594 containerd[1565]: time="2025-07-10T00:22:05.658498404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d8mll,Uid:56e0a3e4-dec2-4c45-aef8-0464474071fe,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:05.670984 kubelet[2700]: E0710 00:22:05.670870 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:05.686766 containerd[1565]: time="2025-07-10T00:22:05.685466751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkqlg,Uid:afcf53aa-dd14-4252-aaed-a0045ee4e6a2,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:06.036831 kubelet[2700]: E0710 00:22:06.035337 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:06.071695 kubelet[2700]: I0710 00:22:06.070491 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cctvd" podStartSLOduration=5.8263318250000005 podStartE2EDuration="15.070467527s" podCreationTimestamp="2025-07-10 00:21:51 +0000 UTC" firstStartedPulling="2025-07-10 00:21:51.927167806 +0000 UTC m=+6.231534265" lastFinishedPulling="2025-07-10 00:22:01.171303495 +0000 UTC m=+15.475669967" observedRunningTime="2025-07-10 00:22:06.069376128 +0000 UTC m=+20.373742612" watchObservedRunningTime="2025-07-10 00:22:06.070467527 +0000 UTC m=+20.374834185" Jul 10 00:22:07.035796 kubelet[2700]: E0710 00:22:07.035675 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:07.773359 systemd-networkd[1450]: cilium_host: Link UP Jul 10 00:22:07.775905 systemd-networkd[1450]: cilium_net: Link UP Jul 10 00:22:07.776202 systemd-networkd[1450]: cilium_net: Gained carrier Jul 10 00:22:07.776409 systemd-networkd[1450]: cilium_host: Gained carrier Jul 10 00:22:07.805909 systemd-networkd[1450]: cilium_host: Gained IPv6LL Jul 10 00:22:07.947274 systemd-networkd[1450]: cilium_vxlan: Link UP Jul 10 00:22:07.947288 systemd-networkd[1450]: cilium_vxlan: Gained carrier Jul 10 00:22:07.953482 systemd-networkd[1450]: cilium_net: Gained IPv6LL Jul 10 00:22:08.039219 kubelet[2700]: E0710 00:22:08.039110 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:08.396723 kernel: NET: Registered PF_ALG protocol family Jul 10 00:22:09.209108 systemd-networkd[1450]: cilium_vxlan: Gained IPv6LL Jul 10 00:22:09.414395 systemd-networkd[1450]: lxc_health: Link UP Jul 10 00:22:09.427578 systemd-networkd[1450]: lxc_health: Gained carrier Jul 10 00:22:09.724322 kubelet[2700]: E0710 00:22:09.724242 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:09.738390 systemd-networkd[1450]: lxc6d1dd750c488: Link UP Jul 10 00:22:09.746910 kernel: eth0: renamed from tmpd607d Jul 10 00:22:09.759373 systemd-networkd[1450]: lxc6d1dd750c488: Gained carrier Jul 10 00:22:09.781760 kernel: eth0: renamed from tmp5e67c Jul 10 00:22:09.785932 systemd-networkd[1450]: lxc926e5a589d0c: Link UP Jul 10 00:22:09.787845 systemd-networkd[1450]: lxc926e5a589d0c: Gained carrier Jul 10 00:22:10.044020 kubelet[2700]: E0710 00:22:10.043907 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:11.001045 systemd-networkd[1450]: lxc_health: Gained IPv6LL Jul 10 00:22:11.046312 
kubelet[2700]: E0710 00:22:11.046262 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:11.192894 systemd-networkd[1450]: lxc926e5a589d0c: Gained IPv6LL Jul 10 00:22:11.512952 systemd-networkd[1450]: lxc6d1dd750c488: Gained IPv6LL Jul 10 00:22:14.403678 containerd[1565]: time="2025-07-10T00:22:14.403012958Z" level=info msg="connecting to shim d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14" address="unix:///run/containerd/s/b30f4f2aa45a3efdef4a191057ff848e0a830e7be2ddd3b85fb82a5484b80764" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:14.422063 containerd[1565]: time="2025-07-10T00:22:14.421939163Z" level=info msg="connecting to shim 5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d" address="unix:///run/containerd/s/4ba2cad579db084dbd69658733c4bd783a6a128d65dd2733838cc5da28839d56" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:14.483163 systemd[1]: Started cri-containerd-d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14.scope - libcontainer container d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14. Jul 10 00:22:14.496799 systemd[1]: Started cri-containerd-5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d.scope - libcontainer container 5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d. Jul 10 00:22:14.590528 containerd[1565]: time="2025-07-10T00:22:14.590400471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d8mll,Uid:56e0a3e4-dec2-4c45-aef8-0464474071fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14\"" Jul 10 00:22:14.591887 kubelet[2700]: E0710 00:22:14.591850 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:14.595950 containerd[1565]: time="2025-07-10T00:22:14.595469480Z" level=info msg="CreateContainer within sandbox \"d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:22:14.618826 containerd[1565]: time="2025-07-10T00:22:14.618772085Z" level=info msg="Container be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:14.627270 containerd[1565]: time="2025-07-10T00:22:14.627205148Z" level=info msg="CreateContainer within sandbox \"d607d8ce0478a771cac3a9977b6a0d01ce54e7f7a2f408666c918698aa75bb14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e\"" Jul 10 00:22:14.629462 containerd[1565]: time="2025-07-10T00:22:14.629415428Z" level=info msg="StartContainer for \"be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e\"" Jul 10 00:22:14.631669 containerd[1565]: time="2025-07-10T00:22:14.631491937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkqlg,Uid:afcf53aa-dd14-4252-aaed-a0045ee4e6a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d\"" Jul 10 00:22:14.633445 containerd[1565]: time="2025-07-10T00:22:14.633391212Z" level=info msg="connecting to shim be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e" 
address="unix:///run/containerd/s/b30f4f2aa45a3efdef4a191057ff848e0a830e7be2ddd3b85fb82a5484b80764" protocol=ttrpc version=3 Jul 10 00:22:14.633692 kubelet[2700]: E0710 00:22:14.633414 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:14.638689 containerd[1565]: time="2025-07-10T00:22:14.638444087Z" level=info msg="CreateContainer within sandbox \"5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:22:14.648622 containerd[1565]: time="2025-07-10T00:22:14.648540303Z" level=info msg="Container c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:14.666933 systemd[1]: Started cri-containerd-be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e.scope - libcontainer container be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e. Jul 10 00:22:14.670634 containerd[1565]: time="2025-07-10T00:22:14.670341972Z" level=info msg="CreateContainer within sandbox \"5e67c6a97d22afe3b9ff6037a272ec448142d87b702a884ddc6334f66aba514d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d\"" Jul 10 00:22:14.684971 containerd[1565]: time="2025-07-10T00:22:14.684837220Z" level=info msg="StartContainer for \"c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d\"" Jul 10 00:22:14.694314 containerd[1565]: time="2025-07-10T00:22:14.694182075Z" level=info msg="connecting to shim c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d" address="unix:///run/containerd/s/4ba2cad579db084dbd69658733c4bd783a6a128d65dd2733838cc5da28839d56" protocol=ttrpc version=3 Jul 10 00:22:14.731428 systemd[1]: Started cri-containerd-c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d.scope - libcontainer container c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d. 
Jul 10 00:22:14.741223 containerd[1565]: time="2025-07-10T00:22:14.741179036Z" level=info msg="StartContainer for \"be172c779408e7ee3e1485e123cddaa6965f5303d98bf3b6ca39e9f834ccc71e\" returns successfully" Jul 10 00:22:14.781918 containerd[1565]: time="2025-07-10T00:22:14.781844613Z" level=info msg="StartContainer for \"c23ce80952ba30fab675a11c99e661bb3d8ee7bc067a236bc9b1e6165a50e11d\" returns successfully" Jul 10 00:22:15.061950 kubelet[2700]: E0710 00:22:15.061781 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:15.069234 kubelet[2700]: E0710 00:22:15.069172 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:15.084479 kubelet[2700]: I0710 00:22:15.084399 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fkqlg" podStartSLOduration=24.084371679 podStartE2EDuration="24.084371679s" podCreationTimestamp="2025-07-10 00:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:15.081907066 +0000 UTC m=+29.386273598" watchObservedRunningTime="2025-07-10 00:22:15.084371679 +0000 UTC m=+29.388738160" Jul 10 00:22:15.379069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048336633.mount: Deactivated successfully. Jul 10 00:22:15.675482 kubelet[2700]: I0710 00:22:15.674823 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d8mll" podStartSLOduration=24.674380642 podStartE2EDuration="24.674380642s" podCreationTimestamp="2025-07-10 00:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:15.108514559 +0000 UTC m=+29.412881043" watchObservedRunningTime="2025-07-10 00:22:15.674380642 +0000 UTC m=+29.978747116" Jul 10 00:22:16.071567 kubelet[2700]: E0710 00:22:16.071534 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:16.072250 kubelet[2700]: E0710 00:22:16.071852 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:17.073587 kubelet[2700]: E0710 00:22:17.073519 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:17.074576 kubelet[2700]: E0710 00:22:17.074346 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:28.351989 systemd[1]: Started sshd@7-143.110.236.9:22-147.75.109.163:54736.service - OpenSSH per-connection server daemon (147.75.109.163:54736). 
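The pod_startup_latency_tracker entries above report podStartSLOduration by comparing the pod's creation timestamp with the time it was observed running; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed to the latency. A rough Go sketch of that arithmetic, using the creation and watch-observed timestamps from the coredns-7c65d6cfc9-fkqlg entry; the time layout and the choice of which observed timestamp to subtract are assumptions for illustration, not kubelet's actual implementation.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout produced by time.Time.String(); assumed to match the log fields.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-07-10 00:21:51 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2025-07-10 00:22:15.084371679 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With zero-valued pull timestamps no image-pull time is excluded, so the
	// start-up duration is simply observed-running time minus creation time,
	// which reproduces the 24.084371679s figure reported above.
	fmt.Println("podStartSLOduration:", observedRunning.Sub(created))
}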
Jul 10 00:22:28.449232 sshd[4032]: Accepted publickey for core from 147.75.109.163 port 54736 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:28.451540 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:28.460133 systemd-logind[1541]: New session 8 of user core. Jul 10 00:22:28.467961 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:22:29.071757 sshd[4034]: Connection closed by 147.75.109.163 port 54736 Jul 10 00:22:29.072553 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:29.078580 systemd[1]: sshd@7-143.110.236.9:22-147.75.109.163:54736.service: Deactivated successfully. Jul 10 00:22:29.081851 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:22:29.083274 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:22:29.085605 systemd-logind[1541]: Removed session 8. Jul 10 00:22:34.098744 systemd[1]: Started sshd@8-143.110.236.9:22-147.75.109.163:54746.service - OpenSSH per-connection server daemon (147.75.109.163:54746). Jul 10 00:22:34.181280 sshd[4046]: Accepted publickey for core from 147.75.109.163 port 54746 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:34.182966 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:34.189983 systemd-logind[1541]: New session 9 of user core. Jul 10 00:22:34.200927 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:22:34.349990 sshd[4048]: Connection closed by 147.75.109.163 port 54746 Jul 10 00:22:34.350903 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:34.356931 systemd[1]: sshd@8-143.110.236.9:22-147.75.109.163:54746.service: Deactivated successfully. Jul 10 00:22:34.360335 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:22:34.361982 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:22:34.364939 systemd-logind[1541]: Removed session 9. Jul 10 00:22:39.366525 systemd[1]: Started sshd@9-143.110.236.9:22-147.75.109.163:57500.service - OpenSSH per-connection server daemon (147.75.109.163:57500). Jul 10 00:22:39.433750 sshd[4061]: Accepted publickey for core from 147.75.109.163 port 57500 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:39.435586 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:39.441739 systemd-logind[1541]: New session 10 of user core. Jul 10 00:22:39.449956 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:22:39.605419 sshd[4063]: Connection closed by 147.75.109.163 port 57500 Jul 10 00:22:39.604831 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:39.609628 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:22:39.610155 systemd[1]: sshd@9-143.110.236.9:22-147.75.109.163:57500.service: Deactivated successfully. Jul 10 00:22:39.613744 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:22:39.616963 systemd-logind[1541]: Removed session 10. Jul 10 00:22:44.623825 systemd[1]: Started sshd@10-143.110.236.9:22-147.75.109.163:57504.service - OpenSSH per-connection server daemon (147.75.109.163:57504). 
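Each "Accepted publickey" line above identifies the client key by an OpenSSH-style SHA256 fingerprint: a SHA-256 digest of the raw public-key blob, base64-encoded without padding. A small standard-library Go sketch that reproduces that form from an authorized_keys entry; the key material below is a placeholder, not the key used on this host.

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// sha256Fingerprint computes the "SHA256:..." form sshd logs: a SHA-256
// digest of the decoded public-key blob, base64-encoded without padding.
func sha256Fingerprint(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder key; substitute the node's actual public key to reproduce
	// the fingerprint shown in the log above.
	fp, err := sha256Fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 placeholder@example")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(fp)
}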
Jul 10 00:22:44.724374 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 57504 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:44.726951 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:44.733723 systemd-logind[1541]: New session 11 of user core. Jul 10 00:22:44.740927 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:22:44.895256 sshd[4079]: Connection closed by 147.75.109.163 port 57504 Jul 10 00:22:44.895919 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:44.907303 systemd[1]: sshd@10-143.110.236.9:22-147.75.109.163:57504.service: Deactivated successfully. Jul 10 00:22:44.910321 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:22:44.912364 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:22:44.916132 systemd[1]: Started sshd@11-143.110.236.9:22-147.75.109.163:57520.service - OpenSSH per-connection server daemon (147.75.109.163:57520). Jul 10 00:22:44.919215 systemd-logind[1541]: Removed session 11. Jul 10 00:22:44.982298 sshd[4092]: Accepted publickey for core from 147.75.109.163 port 57520 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:44.984787 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:44.990452 systemd-logind[1541]: New session 12 of user core. Jul 10 00:22:45.006626 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:22:45.210918 sshd[4094]: Connection closed by 147.75.109.163 port 57520 Jul 10 00:22:45.214865 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:45.225629 systemd[1]: sshd@11-143.110.236.9:22-147.75.109.163:57520.service: Deactivated successfully. Jul 10 00:22:45.230387 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:22:45.235719 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:22:45.243093 systemd[1]: Started sshd@12-143.110.236.9:22-147.75.109.163:57530.service - OpenSSH per-connection server daemon (147.75.109.163:57530). Jul 10 00:22:45.249484 systemd-logind[1541]: Removed session 12. Jul 10 00:22:45.323926 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 57530 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:45.326075 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:45.333362 systemd-logind[1541]: New session 13 of user core. Jul 10 00:22:45.339160 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:22:45.478154 sshd[4106]: Connection closed by 147.75.109.163 port 57530 Jul 10 00:22:45.479307 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:45.484950 systemd[1]: sshd@12-143.110.236.9:22-147.75.109.163:57530.service: Deactivated successfully. Jul 10 00:22:45.488479 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:22:45.492204 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:22:45.493839 systemd-logind[1541]: Removed session 13. Jul 10 00:22:50.496001 systemd[1]: Started sshd@13-143.110.236.9:22-147.75.109.163:47486.service - OpenSSH per-connection server daemon (147.75.109.163:47486). 
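The sshd/systemd-logind entries above follow a fixed pattern per connection: a per-connection sshd@… service starts, logind announces "New session N", and later "Removed session N" once the scope is deactivated. A throwaway Go sketch that pairs those open/close events from a saved journal excerpt to report per-session durations; the sample lines are taken from the session 8 and 9 entries above, and the regular expressions and two-field timestamp layout are assumptions about this particular log format.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

func main() {
	// Excerpt of logind events copied from the journal above.
	journal := `Jul 10 00:22:28.460133 systemd-logind[1541]: New session 8 of user core.
Jul 10 00:22:29.085605 systemd-logind[1541]: Removed session 8.
Jul 10 00:22:34.189983 systemd-logind[1541]: New session 9 of user core.
Jul 10 00:22:34.364939 systemd-logind[1541]: Removed session 9.`

	const stamp = "Jan 2 15:04:05.000000"
	newRe := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) `)
	delRe := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)

	opened := map[string]time.Time{}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			t, _ := time.Parse(stamp, m[1]) // parse error ignored in this sketch
			opened[m[2]] = t
		} else if m := delRe.FindStringSubmatch(line); m != nil {
			t, _ := time.Parse(stamp, m[1])
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %s\n", m[2], t.Sub(start))
			}
		}
	}
}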
Jul 10 00:22:50.564970 sshd[4121]: Accepted publickey for core from 147.75.109.163 port 47486 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:50.566849 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:50.574374 systemd-logind[1541]: New session 14 of user core. Jul 10 00:22:50.578937 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:22:50.717162 sshd[4123]: Connection closed by 147.75.109.163 port 47486 Jul 10 00:22:50.719546 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:50.724964 systemd[1]: sshd@13-143.110.236.9:22-147.75.109.163:47486.service: Deactivated successfully. Jul 10 00:22:50.727585 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:22:50.729262 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:22:50.731986 systemd-logind[1541]: Removed session 14. Jul 10 00:22:54.863102 kubelet[2700]: E0710 00:22:54.862864 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:22:55.737994 systemd[1]: Started sshd@14-143.110.236.9:22-147.75.109.163:47490.service - OpenSSH per-connection server daemon (147.75.109.163:47490). Jul 10 00:22:55.817012 sshd[4137]: Accepted publickey for core from 147.75.109.163 port 47490 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:22:55.819044 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:55.827737 systemd-logind[1541]: New session 15 of user core. Jul 10 00:22:55.835968 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:22:56.004290 sshd[4139]: Connection closed by 147.75.109.163 port 47490 Jul 10 00:22:56.005591 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:56.011589 systemd[1]: sshd@14-143.110.236.9:22-147.75.109.163:47490.service: Deactivated successfully. Jul 10 00:22:56.016670 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:22:56.021956 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:22:56.023888 systemd-logind[1541]: Removed session 15. Jul 10 00:23:01.021894 systemd[1]: Started sshd@15-143.110.236.9:22-147.75.109.163:50188.service - OpenSSH per-connection server daemon (147.75.109.163:50188). Jul 10 00:23:01.113854 sshd[4151]: Accepted publickey for core from 147.75.109.163 port 50188 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:01.115574 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:01.121803 systemd-logind[1541]: New session 16 of user core. Jul 10 00:23:01.129977 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:23:01.291498 sshd[4153]: Connection closed by 147.75.109.163 port 50188 Jul 10 00:23:01.292710 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:01.306752 systemd[1]: sshd@15-143.110.236.9:22-147.75.109.163:50188.service: Deactivated successfully. Jul 10 00:23:01.310045 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:23:01.311449 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:23:01.315387 systemd-logind[1541]: Removed session 16. 
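The recurring kubelet dns.go:153 warning above fires when the resolv.conf handed to pods would list more nameserver entries than the conventional resolver limit of three; the surplus is dropped and the applied line is logged. A stand-in Go sketch of that clamping, assuming the limit of 3; it mirrors the warning text for clarity but is not kubelet's code, and the fourth address in the sample config is invented.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the conventional resolver limit of three entries.
const maxNameservers = 3

// applyNameserverLimit keeps at most maxNameservers entries and logs a
// warning shaped like the kubelet message above when some are omitted.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		applied := servers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
		return applied
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with one entry too many; the first three
	// addresses are taken from the log, the last one is made up.
	conf := `nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 192.0.2.53`
	fmt.Println("applied:", applyNameserverLimit(conf))
}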
Jul 10 00:23:01.318005 systemd[1]: Started sshd@16-143.110.236.9:22-147.75.109.163:50202.service - OpenSSH per-connection server daemon (147.75.109.163:50202). Jul 10 00:23:01.388091 sshd[4165]: Accepted publickey for core from 147.75.109.163 port 50202 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:01.390140 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:01.397489 systemd-logind[1541]: New session 17 of user core. Jul 10 00:23:01.405008 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:23:01.767807 sshd[4167]: Connection closed by 147.75.109.163 port 50202 Jul 10 00:23:01.769066 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:01.784125 systemd[1]: sshd@16-143.110.236.9:22-147.75.109.163:50202.service: Deactivated successfully. Jul 10 00:23:01.788284 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:23:01.790788 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:23:01.796821 systemd-logind[1541]: Removed session 17. Jul 10 00:23:01.800101 systemd[1]: Started sshd@17-143.110.236.9:22-147.75.109.163:50204.service - OpenSSH per-connection server daemon (147.75.109.163:50204). Jul 10 00:23:01.871360 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 50204 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:01.873546 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:01.881803 systemd-logind[1541]: New session 18 of user core. Jul 10 00:23:01.885978 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:23:03.873713 kubelet[2700]: E0710 00:23:03.873595 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:03.996424 sshd[4179]: Connection closed by 147.75.109.163 port 50204 Jul 10 00:23:03.995824 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:04.014700 systemd[1]: sshd@17-143.110.236.9:22-147.75.109.163:50204.service: Deactivated successfully. Jul 10 00:23:04.020453 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:23:04.025587 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:23:04.037236 systemd[1]: Started sshd@18-143.110.236.9:22-147.75.109.163:50216.service - OpenSSH per-connection server daemon (147.75.109.163:50216). Jul 10 00:23:04.042476 systemd-logind[1541]: Removed session 18. Jul 10 00:23:04.155436 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 50216 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:04.157503 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:04.166266 systemd-logind[1541]: New session 19 of user core. Jul 10 00:23:04.174008 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:23:04.525122 sshd[4197]: Connection closed by 147.75.109.163 port 50216 Jul 10 00:23:04.526756 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:04.539541 systemd[1]: sshd@18-143.110.236.9:22-147.75.109.163:50216.service: Deactivated successfully. Jul 10 00:23:04.545669 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:23:04.548167 systemd-logind[1541]: Session 19 logged out. 
Waiting for processes to exit. Jul 10 00:23:04.554877 systemd[1]: Started sshd@19-143.110.236.9:22-147.75.109.163:50228.service - OpenSSH per-connection server daemon (147.75.109.163:50228). Jul 10 00:23:04.559518 systemd-logind[1541]: Removed session 19. Jul 10 00:23:04.634510 sshd[4207]: Accepted publickey for core from 147.75.109.163 port 50228 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:04.637037 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:04.645749 systemd-logind[1541]: New session 20 of user core. Jul 10 00:23:04.652110 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:23:04.805485 sshd[4209]: Connection closed by 147.75.109.163 port 50228 Jul 10 00:23:04.806943 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:04.812502 systemd[1]: sshd@19-143.110.236.9:22-147.75.109.163:50228.service: Deactivated successfully. Jul 10 00:23:04.816655 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:23:04.822299 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:23:04.823494 systemd-logind[1541]: Removed session 20. Jul 10 00:23:09.824981 systemd[1]: Started sshd@20-143.110.236.9:22-147.75.109.163:59156.service - OpenSSH per-connection server daemon (147.75.109.163:59156). Jul 10 00:23:09.861829 kubelet[2700]: E0710 00:23:09.861768 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:09.904940 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 59156 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:09.907116 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:09.915224 systemd-logind[1541]: New session 21 of user core. Jul 10 00:23:09.922969 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:23:10.070348 sshd[4227]: Connection closed by 147.75.109.163 port 59156 Jul 10 00:23:10.071508 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:10.077160 systemd[1]: sshd@20-143.110.236.9:22-147.75.109.163:59156.service: Deactivated successfully. Jul 10 00:23:10.081780 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:23:10.085617 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:23:10.088623 systemd-logind[1541]: Removed session 21. Jul 10 00:23:12.860612 kubelet[2700]: E0710 00:23:12.860497 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:15.090441 systemd[1]: Started sshd@21-143.110.236.9:22-147.75.109.163:59158.service - OpenSSH per-connection server daemon (147.75.109.163:59158). Jul 10 00:23:15.170356 sshd[4238]: Accepted publickey for core from 147.75.109.163 port 59158 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:15.172255 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:15.178950 systemd-logind[1541]: New session 22 of user core. Jul 10 00:23:15.188938 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 10 00:23:15.351246 sshd[4240]: Connection closed by 147.75.109.163 port 59158 Jul 10 00:23:15.352252 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:15.358535 systemd[1]: sshd@21-143.110.236.9:22-147.75.109.163:59158.service: Deactivated successfully. Jul 10 00:23:15.361186 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:23:15.362736 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:23:15.365299 systemd-logind[1541]: Removed session 22. Jul 10 00:23:16.861551 kubelet[2700]: E0710 00:23:16.861499 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:17.861677 kubelet[2700]: E0710 00:23:17.861506 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:20.368358 systemd[1]: Started sshd@22-143.110.236.9:22-147.75.109.163:58284.service - OpenSSH per-connection server daemon (147.75.109.163:58284). Jul 10 00:23:20.440363 sshd[4252]: Accepted publickey for core from 147.75.109.163 port 58284 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:20.442757 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:20.449672 systemd-logind[1541]: New session 23 of user core. Jul 10 00:23:20.459027 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:23:20.613960 sshd[4254]: Connection closed by 147.75.109.163 port 58284 Jul 10 00:23:20.615113 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:20.620124 systemd[1]: sshd@22-143.110.236.9:22-147.75.109.163:58284.service: Deactivated successfully. Jul 10 00:23:20.625263 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:23:20.629078 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:23:20.631577 systemd-logind[1541]: Removed session 23. Jul 10 00:23:22.861564 kubelet[2700]: E0710 00:23:22.861463 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:25.635132 systemd[1]: Started sshd@23-143.110.236.9:22-147.75.109.163:58296.service - OpenSSH per-connection server daemon (147.75.109.163:58296). Jul 10 00:23:25.722764 sshd[4269]: Accepted publickey for core from 147.75.109.163 port 58296 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:25.725287 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:25.731804 systemd-logind[1541]: New session 24 of user core. Jul 10 00:23:25.736983 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:23:25.937749 sshd[4271]: Connection closed by 147.75.109.163 port 58296 Jul 10 00:23:25.938446 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:25.953314 systemd[1]: sshd@23-143.110.236.9:22-147.75.109.163:58296.service: Deactivated successfully. Jul 10 00:23:25.956389 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:23:25.960175 systemd-logind[1541]: Session 24 logged out. Waiting for processes to exit. 
Jul 10 00:23:25.964720 systemd[1]: Started sshd@24-143.110.236.9:22-147.75.109.163:45396.service - OpenSSH per-connection server daemon (147.75.109.163:45396). Jul 10 00:23:25.967917 systemd-logind[1541]: Removed session 24. Jul 10 00:23:26.046521 sshd[4283]: Accepted publickey for core from 147.75.109.163 port 45396 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:26.049133 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:26.057012 systemd-logind[1541]: New session 25 of user core. Jul 10 00:23:26.069050 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:23:27.725486 containerd[1565]: time="2025-07-10T00:23:27.724798140Z" level=info msg="StopContainer for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" with timeout 30 (s)" Jul 10 00:23:27.754659 containerd[1565]: time="2025-07-10T00:23:27.754584311Z" level=info msg="Stop container \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" with signal terminated" Jul 10 00:23:27.779849 systemd[1]: cri-containerd-da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72.scope: Deactivated successfully. Jul 10 00:23:27.782478 containerd[1565]: time="2025-07-10T00:23:27.782419844Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:23:27.785711 containerd[1565]: time="2025-07-10T00:23:27.785602446Z" level=info msg="received exit event container_id:\"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" id:\"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" pid:3272 exited_at:{seconds:1752107007 nanos:785242716}" Jul 10 00:23:27.786300 containerd[1565]: time="2025-07-10T00:23:27.786097011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" id:\"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" pid:3272 exited_at:{seconds:1752107007 nanos:785242716}" Jul 10 00:23:27.790032 containerd[1565]: time="2025-07-10T00:23:27.789959650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" id:\"4f63fdcc74d581927d2fcc25a596633f66431471153ddf2863eefcc11e239526\" pid:4305 exited_at:{seconds:1752107007 nanos:788821505}" Jul 10 00:23:27.794834 containerd[1565]: time="2025-07-10T00:23:27.794662275Z" level=info msg="StopContainer for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" with timeout 2 (s)" Jul 10 00:23:27.795400 containerd[1565]: time="2025-07-10T00:23:27.795345272Z" level=info msg="Stop container \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" with signal terminated" Jul 10 00:23:27.809731 systemd-networkd[1450]: lxc_health: Link DOWN Jul 10 00:23:27.809741 systemd-networkd[1450]: lxc_health: Lost carrier Jul 10 00:23:27.831063 systemd[1]: cri-containerd-b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e.scope: Deactivated successfully. Jul 10 00:23:27.831830 systemd[1]: cri-containerd-b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e.scope: Consumed 8.288s CPU time, 192.6M memory peak, 70.1M read from disk, 13.3M written to disk. 
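The StopContainer entries above show the usual two-phase shutdown: deliver SIGTERM ("Stop container ... with signal terminated"), wait up to the requested timeout (30 s for the first container, 2 s for the second), and only force-kill if the process has not exited by then. A generic Go sketch of that pattern applied to a child process; the 2-second grace period and the sleep command are arbitrary choices for the example, not values taken from containerd.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM, waits up to grace for the process to exit,
// then falls back to SIGKILL -- the same two-phase stop the log shows.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired, force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}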
Jul 10 00:23:27.837164 containerd[1565]: time="2025-07-10T00:23:27.837043682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" pid:3341 exited_at:{seconds:1752107007 nanos:836553103}" Jul 10 00:23:27.837446 containerd[1565]: time="2025-07-10T00:23:27.837193043Z" level=info msg="received exit event container_id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" id:\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" pid:3341 exited_at:{seconds:1752107007 nanos:836553103}" Jul 10 00:23:27.854360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72-rootfs.mount: Deactivated successfully. Jul 10 00:23:27.869149 containerd[1565]: time="2025-07-10T00:23:27.869018136Z" level=info msg="StopContainer for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" returns successfully" Jul 10 00:23:27.870065 containerd[1565]: time="2025-07-10T00:23:27.869970356Z" level=info msg="StopPodSandbox for \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\"" Jul 10 00:23:27.870158 containerd[1565]: time="2025-07-10T00:23:27.870081347Z" level=info msg="Container to stop \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.887211 systemd[1]: cri-containerd-1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa.scope: Deactivated successfully. Jul 10 00:23:27.893814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e-rootfs.mount: Deactivated successfully. 
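The TaskExit events above carry exited_at as separate seconds and nanos fields (protobuf-style timestamps). A tiny Go sketch converting the value from one of those events (the b7c4535f57... container) back into wall-clock time with time.Unix, confirming it lines up with the surrounding journal timestamps.

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at fields copied from the TaskExit event above.
	const seconds, nanos = 1752107007, 836553103

	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t) // 2025-07-10 00:23:27.836553103 +0000 UTC, matching the journal
}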
Jul 10 00:23:27.896043 containerd[1565]: time="2025-07-10T00:23:27.895893013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" id:\"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" pid:2933 exit_status:137 exited_at:{seconds:1752107007 nanos:895583391}" Jul 10 00:23:27.899116 containerd[1565]: time="2025-07-10T00:23:27.899075299Z" level=info msg="StopContainer for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" returns successfully" Jul 10 00:23:27.900142 containerd[1565]: time="2025-07-10T00:23:27.900100596Z" level=info msg="StopPodSandbox for \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\"" Jul 10 00:23:27.900483 containerd[1565]: time="2025-07-10T00:23:27.900357709Z" level=info msg="Container to stop \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.900624 containerd[1565]: time="2025-07-10T00:23:27.900592903Z" level=info msg="Container to stop \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.900898 containerd[1565]: time="2025-07-10T00:23:27.900870203Z" level=info msg="Container to stop \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.900952 containerd[1565]: time="2025-07-10T00:23:27.900903505Z" level=info msg="Container to stop \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.900952 containerd[1565]: time="2025-07-10T00:23:27.900921244Z" level=info msg="Container to stop \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:23:27.910432 systemd[1]: cri-containerd-7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a.scope: Deactivated successfully. Jul 10 00:23:27.953568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a-rootfs.mount: Deactivated successfully. Jul 10 00:23:27.957056 containerd[1565]: time="2025-07-10T00:23:27.957010445Z" level=info msg="shim disconnected" id=7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a namespace=k8s.io Jul 10 00:23:27.957330 containerd[1565]: time="2025-07-10T00:23:27.957192913Z" level=warning msg="cleaning up after shim disconnected" id=7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a namespace=k8s.io Jul 10 00:23:27.972610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa-rootfs.mount: Deactivated successfully. 
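The sandbox TaskExit events above report exit_status:137. By wait-status convention, values above 128 encode death by signal (128 + signal number), so 137 means SIGKILL (9), which is what you would expect when a sandbox is torn down rather than exiting on its own. A small sketch of that decoding; the mapping table only covers the signals relevant here.

package main

import "fmt"

// describeExitStatus decodes the 128+signal convention behind the
// exit_status values in the TaskExit events above.
func describeExitStatus(code int) string {
	signals := map[int]string{9: "SIGKILL", 15: "SIGTERM"} // only the cases relevant here
	if code > 128 {
		if name, ok := signals[code-128]; ok {
			return fmt.Sprintf("killed by signal %d (%s)", code-128, name)
		}
		return fmt.Sprintf("killed by signal %d", code-128)
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	fmt.Println(137, "->", describeExitStatus(137))
	fmt.Println(0, "->", describeExitStatus(0))
}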
Jul 10 00:23:27.973199 containerd[1565]: time="2025-07-10T00:23:27.957211223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:23:27.977125 containerd[1565]: time="2025-07-10T00:23:27.977004571Z" level=info msg="shim disconnected" id=1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa namespace=k8s.io Jul 10 00:23:27.977125 containerd[1565]: time="2025-07-10T00:23:27.977058253Z" level=warning msg="cleaning up after shim disconnected" id=1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa namespace=k8s.io Jul 10 00:23:27.977125 containerd[1565]: time="2025-07-10T00:23:27.977067278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:23:28.011150 containerd[1565]: time="2025-07-10T00:23:28.011044594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" id:\"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" pid:2829 exit_status:137 exited_at:{seconds:1752107007 nanos:913338788}" Jul 10 00:23:28.011150 containerd[1565]: time="2025-07-10T00:23:28.011111288Z" level=info msg="received exit event sandbox_id:\"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" exit_status:137 exited_at:{seconds:1752107007 nanos:913338788}" Jul 10 00:23:28.014891 containerd[1565]: time="2025-07-10T00:23:28.014840776Z" level=info msg="received exit event sandbox_id:\"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" exit_status:137 exited_at:{seconds:1752107007 nanos:895583391}" Jul 10 00:23:28.015068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa-shm.mount: Deactivated successfully. Jul 10 00:23:28.023127 containerd[1565]: time="2025-07-10T00:23:28.022252352Z" level=info msg="TearDown network for sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" successfully" Jul 10 00:23:28.023127 containerd[1565]: time="2025-07-10T00:23:28.022306041Z" level=info msg="StopPodSandbox for \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" returns successfully" Jul 10 00:23:28.024144 containerd[1565]: time="2025-07-10T00:23:28.023736836Z" level=info msg="TearDown network for sandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" successfully" Jul 10 00:23:28.024144 containerd[1565]: time="2025-07-10T00:23:28.023776637Z" level=info msg="StopPodSandbox for \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" returns successfully" Jul 10 00:23:28.085763 kubelet[2700]: I0710 00:23:28.084627 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-cgroup\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.085763 kubelet[2700]: I0710 00:23:28.084737 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-net\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.085763 kubelet[2700]: I0710 00:23:28.084768 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfa4a743-f4f2-4b9a-809c-67302c3ed879-cilium-config-path\") pod 
\"cfa4a743-f4f2-4b9a-809c-67302c3ed879\" (UID: \"cfa4a743-f4f2-4b9a-809c-67302c3ed879\") " Jul 10 00:23:28.085763 kubelet[2700]: I0710 00:23:28.084794 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-hubble-tls\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.085763 kubelet[2700]: I0710 00:23:28.084768 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.086455 kubelet[2700]: I0710 00:23:28.084854 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.086455 kubelet[2700]: I0710 00:23:28.084811 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-kernel\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086455 kubelet[2700]: I0710 00:23:28.084884 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.086455 kubelet[2700]: I0710 00:23:28.084911 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-etc-cni-netd\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086455 kubelet[2700]: I0710 00:23:28.084955 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-lib-modules\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.084993 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-config-path\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.085039 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xck\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-kube-api-access-j5xck\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.085071 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f48a3781-e279-44c8-b050-8f86c8042e5d-clustermesh-secrets\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.085113 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-xtables-lock\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.085139 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-run\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086631 kubelet[2700]: I0710 00:23:28.085182 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-hostproc\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086833 kubelet[2700]: I0710 00:23:28.085212 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cni-path\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 00:23:28.086833 kubelet[2700]: I0710 00:23:28.085235 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-bpf-maps\") pod \"f48a3781-e279-44c8-b050-8f86c8042e5d\" (UID: \"f48a3781-e279-44c8-b050-8f86c8042e5d\") " Jul 10 
00:23:28.086833 kubelet[2700]: I0710 00:23:28.085275 2700 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmzh\" (UniqueName: \"kubernetes.io/projected/cfa4a743-f4f2-4b9a-809c-67302c3ed879-kube-api-access-jvmzh\") pod \"cfa4a743-f4f2-4b9a-809c-67302c3ed879\" (UID: \"cfa4a743-f4f2-4b9a-809c-67302c3ed879\") " Jul 10 00:23:28.086833 kubelet[2700]: I0710 00:23:28.085361 2700 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-cgroup\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.086833 kubelet[2700]: I0710 00:23:28.085385 2700 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.088424 kubelet[2700]: I0710 00:23:28.088135 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.088424 kubelet[2700]: I0710 00:23:28.088222 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.097378 kubelet[2700]: I0710 00:23:28.097331 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.097586 kubelet[2700]: I0710 00:23:28.097569 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.097700 kubelet[2700]: I0710 00:23:28.097631 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.097785 kubelet[2700]: I0710 00:23:28.097774 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.097862 kubelet[2700]: I0710 00:23:28.097852 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:23:28.098075 kubelet[2700]: I0710 00:23:28.098047 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa4a743-f4f2-4b9a-809c-67302c3ed879-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cfa4a743-f4f2-4b9a-809c-67302c3ed879" (UID: "cfa4a743-f4f2-4b9a-809c-67302c3ed879"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:23:28.098266 kubelet[2700]: I0710 00:23:28.098197 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:23:28.105066 kubelet[2700]: I0710 00:23:28.105003 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:23:28.105499 kubelet[2700]: I0710 00:23:28.105300 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa4a743-f4f2-4b9a-809c-67302c3ed879-kube-api-access-jvmzh" (OuterVolumeSpecName: "kube-api-access-jvmzh") pod "cfa4a743-f4f2-4b9a-809c-67302c3ed879" (UID: "cfa4a743-f4f2-4b9a-809c-67302c3ed879"). InnerVolumeSpecName "kube-api-access-jvmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:23:28.105865 kubelet[2700]: I0710 00:23:28.105821 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-kube-api-access-j5xck" (OuterVolumeSpecName: "kube-api-access-j5xck") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "kube-api-access-j5xck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:23:28.106513 kubelet[2700]: I0710 00:23:28.106483 2700 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48a3781-e279-44c8-b050-8f86c8042e5d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f48a3781-e279-44c8-b050-8f86c8042e5d" (UID: "f48a3781-e279-44c8-b050-8f86c8042e5d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:23:28.186097 kubelet[2700]: I0710 00:23:28.186025 2700 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cni-path\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186433 kubelet[2700]: I0710 00:23:28.186193 2700 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-bpf-maps\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186433 kubelet[2700]: I0710 00:23:28.186213 2700 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvmzh\" (UniqueName: \"kubernetes.io/projected/cfa4a743-f4f2-4b9a-809c-67302c3ed879-kube-api-access-jvmzh\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186582 2700 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-host-proc-sys-net\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186619 2700 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfa4a743-f4f2-4b9a-809c-67302c3ed879-cilium-config-path\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186676 2700 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-hubble-tls\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186691 2700 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-etc-cni-netd\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186704 2700 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-lib-modules\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186740 2700 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-config-path\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186756 2700 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xck\" (UniqueName: \"kubernetes.io/projected/f48a3781-e279-44c8-b050-8f86c8042e5d-kube-api-access-j5xck\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.186871 kubelet[2700]: I0710 00:23:28.186770 2700 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f48a3781-e279-44c8-b050-8f86c8042e5d-clustermesh-secrets\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.187282 kubelet[2700]: I0710 00:23:28.186784 2700 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-hostproc\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.187282 kubelet[2700]: I0710 00:23:28.186799 2700 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-xtables-lock\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.187282 kubelet[2700]: I0710 00:23:28.186839 2700 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f48a3781-e279-44c8-b050-8f86c8042e5d-cilium-run\") on node \"ci-4344.1.1-n-2654026dcf\" DevicePath \"\"" Jul 10 00:23:28.294762 kubelet[2700]: I0710 00:23:28.294274 2700 scope.go:117] "RemoveContainer" containerID="da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72" Jul 10 00:23:28.301144 containerd[1565]: time="2025-07-10T00:23:28.300949755Z" level=info msg="RemoveContainer for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\"" Jul 10 00:23:28.316107 systemd[1]: Removed slice kubepods-besteffort-podcfa4a743_f4f2_4b9a_809c_67302c3ed879.slice - libcontainer container kubepods-besteffort-podcfa4a743_f4f2_4b9a_809c_67302c3ed879.slice. Jul 10 00:23:28.331607 systemd[1]: Removed slice kubepods-burstable-podf48a3781_e279_44c8_b050_8f86c8042e5d.slice - libcontainer container kubepods-burstable-podf48a3781_e279_44c8_b050_8f86c8042e5d.slice. Jul 10 00:23:28.331908 systemd[1]: kubepods-burstable-podf48a3781_e279_44c8_b050_8f86c8042e5d.slice: Consumed 8.421s CPU time, 192.9M memory peak, 70.3M read from disk, 13.3M written to disk. Jul 10 00:23:28.334520 containerd[1565]: time="2025-07-10T00:23:28.334325817Z" level=info msg="RemoveContainer for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" returns successfully" Jul 10 00:23:28.335849 kubelet[2700]: I0710 00:23:28.334973 2700 scope.go:117] "RemoveContainer" containerID="da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72" Jul 10 00:23:28.345397 containerd[1565]: time="2025-07-10T00:23:28.336266824Z" level=error msg="ContainerStatus for \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\": not found" Jul 10 00:23:28.346116 kubelet[2700]: E0710 00:23:28.346051 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\": not found" containerID="da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72" Jul 10 00:23:28.348879 kubelet[2700]: I0710 00:23:28.347931 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72"} err="failed to get container status \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\": rpc error: code = NotFound desc = an error occurred when try to find container \"da7b2edc406b834b59aeb9087c70a6f6cadf214813a54dec1251065fbd214e72\": not found" Jul 10 00:23:28.348879 kubelet[2700]: I0710 00:23:28.348045 2700 scope.go:117] "RemoveContainer" containerID="b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e" Jul 10 00:23:28.355573 containerd[1565]: time="2025-07-10T00:23:28.355522242Z" level=info msg="RemoveContainer for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\"" Jul 10 00:23:28.379680 containerd[1565]: time="2025-07-10T00:23:28.376323811Z" level=info msg="RemoveContainer for 
\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" returns successfully" Jul 10 00:23:28.380306 kubelet[2700]: I0710 00:23:28.380100 2700 scope.go:117] "RemoveContainer" containerID="be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517" Jul 10 00:23:28.385920 containerd[1565]: time="2025-07-10T00:23:28.385863677Z" level=info msg="RemoveContainer for \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\"" Jul 10 00:23:28.393854 containerd[1565]: time="2025-07-10T00:23:28.393631978Z" level=info msg="RemoveContainer for \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" returns successfully" Jul 10 00:23:28.394971 kubelet[2700]: I0710 00:23:28.394922 2700 scope.go:117] "RemoveContainer" containerID="b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e" Jul 10 00:23:28.402401 containerd[1565]: time="2025-07-10T00:23:28.402350007Z" level=info msg="RemoveContainer for \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\"" Jul 10 00:23:28.412684 containerd[1565]: time="2025-07-10T00:23:28.412159232Z" level=info msg="RemoveContainer for \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" returns successfully" Jul 10 00:23:28.414070 kubelet[2700]: I0710 00:23:28.413785 2700 scope.go:117] "RemoveContainer" containerID="8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1" Jul 10 00:23:28.417447 containerd[1565]: time="2025-07-10T00:23:28.417379830Z" level=info msg="RemoveContainer for \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\"" Jul 10 00:23:28.421499 containerd[1565]: time="2025-07-10T00:23:28.421421665Z" level=info msg="RemoveContainer for \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" returns successfully" Jul 10 00:23:28.421843 kubelet[2700]: I0710 00:23:28.421803 2700 scope.go:117] "RemoveContainer" containerID="a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207" Jul 10 00:23:28.424282 containerd[1565]: time="2025-07-10T00:23:28.424235924Z" level=info msg="RemoveContainer for \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\"" Jul 10 00:23:28.428068 containerd[1565]: time="2025-07-10T00:23:28.428015515Z" level=info msg="RemoveContainer for \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" returns successfully" Jul 10 00:23:28.428790 kubelet[2700]: I0710 00:23:28.428759 2700 scope.go:117] "RemoveContainer" containerID="b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e" Jul 10 00:23:28.429168 containerd[1565]: time="2025-07-10T00:23:28.429124132Z" level=error msg="ContainerStatus for \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\": not found" Jul 10 00:23:28.429700 kubelet[2700]: E0710 00:23:28.429634 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\": not found" containerID="b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e" Jul 10 00:23:28.429800 kubelet[2700]: I0710 00:23:28.429715 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e"} err="failed to get container status 
\"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7c4535f57b6141e9a791bf0b06e218cb9cea9a7eeba98e60a8c616c4378841e\": not found" Jul 10 00:23:28.429800 kubelet[2700]: I0710 00:23:28.429752 2700 scope.go:117] "RemoveContainer" containerID="be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517" Jul 10 00:23:28.430324 containerd[1565]: time="2025-07-10T00:23:28.430174347Z" level=error msg="ContainerStatus for \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\": not found" Jul 10 00:23:28.430569 kubelet[2700]: E0710 00:23:28.430525 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\": not found" containerID="be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517" Jul 10 00:23:28.430636 kubelet[2700]: I0710 00:23:28.430576 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517"} err="failed to get container status \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\": rpc error: code = NotFound desc = an error occurred when try to find container \"be0283237bc1301f4269714c119f5fa872757a5af4835d72e92c11bfc1f7d517\": not found" Jul 10 00:23:28.430636 kubelet[2700]: I0710 00:23:28.430607 2700 scope.go:117] "RemoveContainer" containerID="b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e" Jul 10 00:23:28.431013 containerd[1565]: time="2025-07-10T00:23:28.430977009Z" level=error msg="ContainerStatus for \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\": not found" Jul 10 00:23:28.431155 kubelet[2700]: E0710 00:23:28.431128 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\": not found" containerID="b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e" Jul 10 00:23:28.431242 kubelet[2700]: I0710 00:23:28.431163 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e"} err="failed to get container status \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b588e7a7d334304445bf8db6cfc15adb8f20231da0d361beb8b05c8e0f4a359e\": not found" Jul 10 00:23:28.431242 kubelet[2700]: I0710 00:23:28.431189 2700 scope.go:117] "RemoveContainer" containerID="8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1" Jul 10 00:23:28.431783 containerd[1565]: time="2025-07-10T00:23:28.431532171Z" level=error msg="ContainerStatus for \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\": not found" Jul 10 00:23:28.432097 kubelet[2700]: E0710 00:23:28.431766 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\": not found" containerID="8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1" Jul 10 00:23:28.432097 kubelet[2700]: I0710 00:23:28.431799 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1"} err="failed to get container status \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f985625a9a3b49c676fcaeeb314019fc3b41dc3b323e7cf60540c69776865f1\": not found" Jul 10 00:23:28.432097 kubelet[2700]: I0710 00:23:28.431897 2700 scope.go:117] "RemoveContainer" containerID="a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207" Jul 10 00:23:28.432386 containerd[1565]: time="2025-07-10T00:23:28.432329807Z" level=error msg="ContainerStatus for \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\": not found" Jul 10 00:23:28.432540 kubelet[2700]: E0710 00:23:28.432513 2700 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\": not found" containerID="a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207" Jul 10 00:23:28.432617 kubelet[2700]: I0710 00:23:28.432547 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207"} err="failed to get container status \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\": rpc error: code = NotFound desc = an error occurred when try to find container \"a862348204f47d489f018559408527932b7799ea617e6c40f1056bb78160a207\": not found" Jul 10 00:23:28.849680 systemd[1]: var-lib-kubelet-pods-cfa4a743\x2df4f2\x2d4b9a\x2d809c\x2d67302c3ed879-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvmzh.mount: Deactivated successfully. Jul 10 00:23:28.849895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a-shm.mount: Deactivated successfully. Jul 10 00:23:28.850014 systemd[1]: var-lib-kubelet-pods-f48a3781\x2de279\x2d44c8\x2db050\x2d8f86c8042e5d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5xck.mount: Deactivated successfully. Jul 10 00:23:28.850123 systemd[1]: var-lib-kubelet-pods-f48a3781\x2de279\x2d44c8\x2db050\x2d8f86c8042e5d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:23:28.850231 systemd[1]: var-lib-kubelet-pods-f48a3781\x2de279\x2d44c8\x2db050\x2d8f86c8042e5d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 10 00:23:29.651671 sshd[4285]: Connection closed by 147.75.109.163 port 45396 Jul 10 00:23:29.652213 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:29.662615 systemd[1]: sshd@24-143.110.236.9:22-147.75.109.163:45396.service: Deactivated successfully. Jul 10 00:23:29.665907 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:23:29.667153 systemd-logind[1541]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:23:29.672941 systemd[1]: Started sshd@25-143.110.236.9:22-147.75.109.163:45398.service - OpenSSH per-connection server daemon (147.75.109.163:45398). Jul 10 00:23:29.675461 systemd-logind[1541]: Removed session 25. Jul 10 00:23:29.762591 sshd[4434]: Accepted publickey for core from 147.75.109.163 port 45398 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:29.764695 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:29.770608 systemd-logind[1541]: New session 26 of user core. Jul 10 00:23:29.778965 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:23:29.865222 kubelet[2700]: I0710 00:23:29.865177 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa4a743-f4f2-4b9a-809c-67302c3ed879" path="/var/lib/kubelet/pods/cfa4a743-f4f2-4b9a-809c-67302c3ed879/volumes" Jul 10 00:23:29.866202 kubelet[2700]: I0710 00:23:29.865785 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" path="/var/lib/kubelet/pods/f48a3781-e279-44c8-b050-8f86c8042e5d/volumes" Jul 10 00:23:30.619768 sshd[4436]: Connection closed by 147.75.109.163 port 45398 Jul 10 00:23:30.622951 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:30.635704 systemd[1]: sshd@25-143.110.236.9:22-147.75.109.163:45398.service: Deactivated successfully. Jul 10 00:23:30.639472 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:23:30.643031 systemd-logind[1541]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:23:30.652335 systemd[1]: Started sshd@26-143.110.236.9:22-147.75.109.163:45408.service - OpenSSH per-connection server daemon (147.75.109.163:45408). Jul 10 00:23:30.656872 systemd-logind[1541]: Removed session 26. 
Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.681472 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="clean-cilium-state" Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.683714 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="cilium-agent" Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.683754 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="apply-sysctl-overwrites" Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.683776 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="mount-cgroup" Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.683791 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="mount-bpf-fs" Jul 10 00:23:30.685685 kubelet[2700]: E0710 00:23:30.683805 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfa4a743-f4f2-4b9a-809c-67302c3ed879" containerName="cilium-operator" Jul 10 00:23:30.685685 kubelet[2700]: I0710 00:23:30.683918 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48a3781-e279-44c8-b050-8f86c8042e5d" containerName="cilium-agent" Jul 10 00:23:30.685685 kubelet[2700]: I0710 00:23:30.683939 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa4a743-f4f2-4b9a-809c-67302c3ed879" containerName="cilium-operator" Jul 10 00:23:30.700165 systemd[1]: Created slice kubepods-burstable-poddef3d456_a33d_4389_b5ef_db4650c54d49.slice - libcontainer container kubepods-burstable-poddef3d456_a33d_4389_b5ef_db4650c54d49.slice. Jul 10 00:23:30.756343 sshd[4446]: Accepted publickey for core from 147.75.109.163 port 45408 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:30.757972 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:30.771621 systemd-logind[1541]: New session 27 of user core. Jul 10 00:23:30.775967 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 10 00:23:30.805210 kubelet[2700]: I0710 00:23:30.805143 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/def3d456-a33d-4389-b5ef-db4650c54d49-clustermesh-secrets\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805210 kubelet[2700]: I0710 00:23:30.805201 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/def3d456-a33d-4389-b5ef-db4650c54d49-cilium-ipsec-secrets\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805230 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thhd\" (UniqueName: \"kubernetes.io/projected/def3d456-a33d-4389-b5ef-db4650c54d49-kube-api-access-7thhd\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805255 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-cilium-run\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805275 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-cni-path\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805299 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-cilium-cgroup\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805316 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-etc-cni-netd\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805434 kubelet[2700]: I0710 00:23:30.805344 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-xtables-lock\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805362 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/def3d456-a33d-4389-b5ef-db4650c54d49-hubble-tls\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805378 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-host-proc-sys-net\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805392 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-bpf-maps\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805432 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-hostproc\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805448 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-lib-modules\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.805594 kubelet[2700]: I0710 00:23:30.805463 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/def3d456-a33d-4389-b5ef-db4650c54d49-cilium-config-path\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.806327 kubelet[2700]: I0710 00:23:30.805479 2700 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/def3d456-a33d-4389-b5ef-db4650c54d49-host-proc-sys-kernel\") pod \"cilium-wsm92\" (UID: \"def3d456-a33d-4389-b5ef-db4650c54d49\") " pod="kube-system/cilium-wsm92" Jul 10 00:23:30.842892 sshd[4448]: Connection closed by 147.75.109.163 port 45408 Jul 10 00:23:30.842190 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:30.859894 systemd[1]: sshd@26-143.110.236.9:22-147.75.109.163:45408.service: Deactivated successfully. Jul 10 00:23:30.863394 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:23:30.865242 systemd-logind[1541]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:23:30.871974 systemd[1]: Started sshd@27-143.110.236.9:22-147.75.109.163:45414.service - OpenSSH per-connection server daemon (147.75.109.163:45414). Jul 10 00:23:30.874032 systemd-logind[1541]: Removed session 27. Jul 10 00:23:30.957592 kubelet[2700]: E0710 00:23:30.957215 2700 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:23:30.987350 sshd[4455]: Accepted publickey for core from 147.75.109.163 port 45414 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:23:30.990074 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:30.999753 systemd-logind[1541]: New session 28 of user core. Jul 10 00:23:31.005000 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 10 00:23:31.008962 kubelet[2700]: E0710 00:23:31.008796 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:31.012964 containerd[1565]: time="2025-07-10T00:23:31.012921936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wsm92,Uid:def3d456-a33d-4389-b5ef-db4650c54d49,Namespace:kube-system,Attempt:0,}" Jul 10 00:23:31.038584 containerd[1565]: time="2025-07-10T00:23:31.038075838Z" level=info msg="connecting to shim 37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:23:31.072315 systemd[1]: Started cri-containerd-37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019.scope - libcontainer container 37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019. Jul 10 00:23:31.130258 containerd[1565]: time="2025-07-10T00:23:31.129544130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wsm92,Uid:def3d456-a33d-4389-b5ef-db4650c54d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\"" Jul 10 00:23:31.132781 kubelet[2700]: E0710 00:23:31.132603 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:31.142173 containerd[1565]: time="2025-07-10T00:23:31.142121047Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:23:31.180659 containerd[1565]: time="2025-07-10T00:23:31.180129599Z" level=info msg="Container 24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:31.187531 containerd[1565]: time="2025-07-10T00:23:31.187448806Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\"" Jul 10 00:23:31.189017 containerd[1565]: time="2025-07-10T00:23:31.188956809Z" level=info msg="StartContainer for \"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\"" Jul 10 00:23:31.190396 containerd[1565]: time="2025-07-10T00:23:31.190302991Z" level=info msg="connecting to shim 24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" protocol=ttrpc version=3 Jul 10 00:23:31.231070 systemd[1]: Started cri-containerd-24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d.scope - libcontainer container 24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d. Jul 10 00:23:31.277827 containerd[1565]: time="2025-07-10T00:23:31.277485372Z" level=info msg="StartContainer for \"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\" returns successfully" Jul 10 00:23:31.297734 systemd[1]: cri-containerd-24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d.scope: Deactivated successfully. 
Jul 10 00:23:31.299191 systemd[1]: cri-containerd-24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d.scope: Consumed 31ms CPU time, 9.7M memory peak, 3.2M read from disk. Jul 10 00:23:31.300570 containerd[1565]: time="2025-07-10T00:23:31.300449542Z" level=info msg="received exit event container_id:\"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\" id:\"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\" pid:4530 exited_at:{seconds:1752107011 nanos:299403359}" Jul 10 00:23:31.301560 containerd[1565]: time="2025-07-10T00:23:31.301067203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\" id:\"24a8cea4316b36120cd5f06406f2c41d4c0c87e457a66573a37b5e3469949b1d\" pid:4530 exited_at:{seconds:1752107011 nanos:299403359}" Jul 10 00:23:31.334324 kubelet[2700]: E0710 00:23:31.333947 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:32.339673 kubelet[2700]: E0710 00:23:32.339590 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:32.342343 containerd[1565]: time="2025-07-10T00:23:32.342293182Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:23:32.356004 containerd[1565]: time="2025-07-10T00:23:32.355943578Z" level=info msg="Container dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:32.368839 containerd[1565]: time="2025-07-10T00:23:32.368767368Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\"" Jul 10 00:23:32.370209 containerd[1565]: time="2025-07-10T00:23:32.369793928Z" level=info msg="StartContainer for \"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\"" Jul 10 00:23:32.371154 containerd[1565]: time="2025-07-10T00:23:32.371117379Z" level=info msg="connecting to shim dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" protocol=ttrpc version=3 Jul 10 00:23:32.407921 systemd[1]: Started cri-containerd-dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d.scope - libcontainer container dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d. Jul 10 00:23:32.451332 containerd[1565]: time="2025-07-10T00:23:32.451275167Z" level=info msg="StartContainer for \"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\" returns successfully" Jul 10 00:23:32.463338 systemd[1]: cri-containerd-dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d.scope: Deactivated successfully. Jul 10 00:23:32.464327 systemd[1]: cri-containerd-dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d.scope: Consumed 28ms CPU time, 7.2M memory peak, 2M read from disk. 
Jul 10 00:23:32.465877 containerd[1565]: time="2025-07-10T00:23:32.465716403Z" level=info msg="received exit event container_id:\"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\" id:\"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\" pid:4573 exited_at:{seconds:1752107012 nanos:464944457}" Jul 10 00:23:32.466058 containerd[1565]: time="2025-07-10T00:23:32.466027309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\" id:\"dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d\" pid:4573 exited_at:{seconds:1752107012 nanos:464944457}" Jul 10 00:23:32.497809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbe883f8bbee475dfc79f7e31e4dde8e821e777725975385ff01b33f706ce15d-rootfs.mount: Deactivated successfully. Jul 10 00:23:33.347425 kubelet[2700]: E0710 00:23:33.347378 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:33.350594 containerd[1565]: time="2025-07-10T00:23:33.350511004Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:23:33.373688 containerd[1565]: time="2025-07-10T00:23:33.370294282Z" level=info msg="Container 180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:33.382482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209485485.mount: Deactivated successfully. Jul 10 00:23:33.387142 containerd[1565]: time="2025-07-10T00:23:33.387051531Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\"" Jul 10 00:23:33.388820 containerd[1565]: time="2025-07-10T00:23:33.388056409Z" level=info msg="StartContainer for \"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\"" Jul 10 00:23:33.392743 containerd[1565]: time="2025-07-10T00:23:33.392684232Z" level=info msg="connecting to shim 180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" protocol=ttrpc version=3 Jul 10 00:23:33.422886 systemd[1]: Started cri-containerd-180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670.scope - libcontainer container 180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670. Jul 10 00:23:33.487741 containerd[1565]: time="2025-07-10T00:23:33.487683151Z" level=info msg="StartContainer for \"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\" returns successfully" Jul 10 00:23:33.490079 systemd[1]: cri-containerd-180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670.scope: Deactivated successfully. 
Jul 10 00:23:33.494588 containerd[1565]: time="2025-07-10T00:23:33.494513145Z" level=info msg="received exit event container_id:\"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\" id:\"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\" pid:4616 exited_at:{seconds:1752107013 nanos:493259337}" Jul 10 00:23:33.495360 containerd[1565]: time="2025-07-10T00:23:33.494768454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\" id:\"180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670\" pid:4616 exited_at:{seconds:1752107013 nanos:493259337}" Jul 10 00:23:33.530176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-180224391a89ec65176186bc2e34928fe47a455aa4221490d445ceed467dc670-rootfs.mount: Deactivated successfully. Jul 10 00:23:34.355853 kubelet[2700]: E0710 00:23:34.355806 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:34.360006 containerd[1565]: time="2025-07-10T00:23:34.359947909Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:23:34.398117 containerd[1565]: time="2025-07-10T00:23:34.397937420Z" level=info msg="Container ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:34.410180 containerd[1565]: time="2025-07-10T00:23:34.409954125Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\"" Jul 10 00:23:34.411168 containerd[1565]: time="2025-07-10T00:23:34.411061545Z" level=info msg="StartContainer for \"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\"" Jul 10 00:23:34.412790 containerd[1565]: time="2025-07-10T00:23:34.412724765Z" level=info msg="connecting to shim ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" protocol=ttrpc version=3 Jul 10 00:23:34.443030 systemd[1]: Started cri-containerd-ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526.scope - libcontainer container ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526. Jul 10 00:23:34.489161 systemd[1]: cri-containerd-ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526.scope: Deactivated successfully. 
Jul 10 00:23:34.491347 containerd[1565]: time="2025-07-10T00:23:34.491021855Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef3d456_a33d_4389_b5ef_db4650c54d49.slice/cri-containerd-ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526.scope/memory.events\": no such file or directory" Jul 10 00:23:34.493065 containerd[1565]: time="2025-07-10T00:23:34.492983353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\" id:\"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\" pid:4658 exited_at:{seconds:1752107014 nanos:492072349}" Jul 10 00:23:34.494823 containerd[1565]: time="2025-07-10T00:23:34.494713866Z" level=info msg="received exit event container_id:\"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\" id:\"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\" pid:4658 exited_at:{seconds:1752107014 nanos:492072349}" Jul 10 00:23:34.497746 containerd[1565]: time="2025-07-10T00:23:34.497691219Z" level=info msg="StartContainer for \"ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526\" returns successfully" Jul 10 00:23:34.539260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea6c2ec6597a15dcf632cd6588d551ec3e250e8044ba2072cdbe41ad6fb15526-rootfs.mount: Deactivated successfully. Jul 10 00:23:35.364664 kubelet[2700]: E0710 00:23:35.364616 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:35.370224 containerd[1565]: time="2025-07-10T00:23:35.370175180Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:23:35.385949 containerd[1565]: time="2025-07-10T00:23:35.383889204Z" level=info msg="Container 0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:35.399732 containerd[1565]: time="2025-07-10T00:23:35.397318396Z" level=info msg="CreateContainer within sandbox \"37ca31526c632b23289b1f580279e19c594b077d7e301d05fd219dd2be538019\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\"" Jul 10 00:23:35.401894 containerd[1565]: time="2025-07-10T00:23:35.401827957Z" level=info msg="StartContainer for \"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\"" Jul 10 00:23:35.403448 containerd[1565]: time="2025-07-10T00:23:35.403009280Z" level=info msg="connecting to shim 0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea" address="unix:///run/containerd/s/20ca6f89810bf9eb30c4f907d75d80737348b961d16156bdd87870ee352174d1" protocol=ttrpc version=3 Jul 10 00:23:35.447066 systemd[1]: Started cri-containerd-0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea.scope - libcontainer container 0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea. 
Jul 10 00:23:35.498633 containerd[1565]: time="2025-07-10T00:23:35.498510044Z" level=info msg="StartContainer for \"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" returns successfully" Jul 10 00:23:35.609709 containerd[1565]: time="2025-07-10T00:23:35.609631227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"b63aedecc6bb9f9d3b4f60fd1f504833bb4b31024372a1619e2762667eb0fe5a\" pid:4731 exited_at:{seconds:1752107015 nanos:609252954}" Jul 10 00:23:36.040107 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 10 00:23:36.379899 kubelet[2700]: E0710 00:23:36.378569 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:37.382456 kubelet[2700]: E0710 00:23:37.381887 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:37.752463 containerd[1565]: time="2025-07-10T00:23:37.752270258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"4d39d94eef89cef9f08ff0892109e96228d181e535759b55a4cdd63262e6d7da\" pid:4825 exit_status:1 exited_at:{seconds:1752107017 nanos:749813129}" Jul 10 00:23:37.862671 kubelet[2700]: E0710 00:23:37.862440 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:39.802228 systemd-networkd[1450]: lxc_health: Link UP Jul 10 00:23:39.834129 systemd-networkd[1450]: lxc_health: Gained carrier Jul 10 00:23:40.137070 containerd[1565]: time="2025-07-10T00:23:40.136987276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"f704b3c55892a1496a0bb0d80a70c1672a9f730d08fd0efb78496eaa39d717de\" pid:5251 exit_status:1 exited_at:{seconds:1752107020 nanos:136004519}" Jul 10 00:23:41.010671 kubelet[2700]: E0710 00:23:41.010286 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:41.042453 kubelet[2700]: I0710 00:23:41.041207 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wsm92" podStartSLOduration=11.041181041 podStartE2EDuration="11.041181041s" podCreationTimestamp="2025-07-10 00:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:23:36.403736652 +0000 UTC m=+110.708103152" watchObservedRunningTime="2025-07-10 00:23:41.041181041 +0000 UTC m=+115.345547524" Jul 10 00:23:41.177597 systemd-networkd[1450]: lxc_health: Gained IPv6LL Jul 10 00:23:41.397659 kubelet[2700]: E0710 00:23:41.397346 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:42.351225 containerd[1565]: time="2025-07-10T00:23:42.351054067Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"2c5f58b3b34de9f34f7671d56ebd2eeb3e0e384c011614d3985e57eedca2dccc\" pid:5289 exited_at:{seconds:1752107022 nanos:349559633}" Jul 10 00:23:42.401474 kubelet[2700]: E0710 00:23:42.400759 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:23:44.525465 containerd[1565]: time="2025-07-10T00:23:44.525416560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"2c5372759274d12e8839384d1817276b3c5a8e88a657973de99345cc476bad26\" pid:5316 exited_at:{seconds:1752107024 nanos:524629562}" Jul 10 00:23:45.843852 containerd[1565]: time="2025-07-10T00:23:45.843753717Z" level=info msg="StopPodSandbox for \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\"" Jul 10 00:23:45.844891 containerd[1565]: time="2025-07-10T00:23:45.844852435Z" level=info msg="TearDown network for sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" successfully" Jul 10 00:23:45.844891 containerd[1565]: time="2025-07-10T00:23:45.844889976Z" level=info msg="StopPodSandbox for \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" returns successfully" Jul 10 00:23:45.845525 containerd[1565]: time="2025-07-10T00:23:45.845396741Z" level=info msg="RemovePodSandbox for \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\"" Jul 10 00:23:45.845525 containerd[1565]: time="2025-07-10T00:23:45.845429022Z" level=info msg="Forcibly stopping sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\"" Jul 10 00:23:45.845683 containerd[1565]: time="2025-07-10T00:23:45.845529930Z" level=info msg="TearDown network for sandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" successfully" Jul 10 00:23:45.846840 containerd[1565]: time="2025-07-10T00:23:45.846803076Z" level=info msg="Ensure that sandbox 7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a in task-service has been cleanup successfully" Jul 10 00:23:45.854778 containerd[1565]: time="2025-07-10T00:23:45.854707304Z" level=info msg="RemovePodSandbox \"7960090146e357861bc72d1ac095f31c7d45320e775663865e60e1704ce0720a\" returns successfully" Jul 10 00:23:45.856896 containerd[1565]: time="2025-07-10T00:23:45.856847269Z" level=info msg="StopPodSandbox for \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\"" Jul 10 00:23:45.857082 containerd[1565]: time="2025-07-10T00:23:45.857022098Z" level=info msg="TearDown network for sandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" successfully" Jul 10 00:23:45.857082 containerd[1565]: time="2025-07-10T00:23:45.857037845Z" level=info msg="StopPodSandbox for \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" returns successfully" Jul 10 00:23:45.858607 containerd[1565]: time="2025-07-10T00:23:45.857731647Z" level=info msg="RemovePodSandbox for \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\"" Jul 10 00:23:45.858607 containerd[1565]: time="2025-07-10T00:23:45.857766308Z" level=info msg="Forcibly stopping sandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\"" Jul 10 00:23:45.858607 containerd[1565]: time="2025-07-10T00:23:45.857908852Z" level=info msg="TearDown network for sandbox 
\"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" successfully" Jul 10 00:23:45.859366 containerd[1565]: time="2025-07-10T00:23:45.859324910Z" level=info msg="Ensure that sandbox 1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa in task-service has been cleanup successfully" Jul 10 00:23:45.863291 containerd[1565]: time="2025-07-10T00:23:45.863238629Z" level=info msg="RemovePodSandbox \"1527186d17aa854f90b58c7e1c416b2f975c739307071c4bed9e389a51bac3aa\" returns successfully" Jul 10 00:23:46.689918 containerd[1565]: time="2025-07-10T00:23:46.689429775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0459bf372ed2f7a334a1e223d670d4896fafbf3a42cf84c50bf37744e362d5ea\" id:\"2eff1f28ba8d37a9a4668adbf3a26e9eec915add961ea2b4e42a73fece1a3b60\" pid:5348 exited_at:{seconds:1752107026 nanos:687854108}" Jul 10 00:23:46.701118 sshd[4463]: Connection closed by 147.75.109.163 port 45414 Jul 10 00:23:46.702289 sshd-session[4455]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:46.721284 systemd[1]: sshd@27-143.110.236.9:22-147.75.109.163:45414.service: Deactivated successfully. Jul 10 00:23:46.728989 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 00:23:46.731744 systemd-logind[1541]: Session 28 logged out. Waiting for processes to exit. Jul 10 00:23:46.733819 systemd-logind[1541]: Removed session 28.