Sep 13 00:51:11.825788 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:51:11.825814 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:51:11.825827 kernel: BIOS-provided physical RAM map:
Sep 13 00:51:11.825834 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:51:11.825840 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:51:11.825846 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:51:11.825857 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:51:11.825867 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:51:11.825879 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:51:11.825889 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:51:11.825899 kernel: NX (Execute Disable) protection: active
Sep 13 00:51:11.825910 kernel: SMBIOS 2.8 present.
Sep 13 00:51:11.825919 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:51:11.825929 kernel: Hypervisor detected: KVM
Sep 13 00:51:11.825942 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:51:11.825956 kernel: kvm-clock: cpu 0, msr 6519f001, primary cpu clock
Sep 13 00:51:11.825966 kernel: kvm-clock: using sched offset of 3338682991 cycles
Sep 13 00:51:11.825975 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:51:11.831066 kernel: tsc: Detected 2494.140 MHz processor
Sep 13 00:51:11.831085 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:51:11.831095 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:51:11.831103 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:51:11.831110 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:51:11.831127 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:51:11.831134 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:51:11.831142 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831150 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831158 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831165 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:51:11.831173 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831180 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831188 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831198 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:51:11.831206 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:51:11.831213 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:51:11.831221 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:51:11.831228 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:51:11.831236 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:51:11.831243 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:51:11.831251 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:51:11.831265 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:51:11.831273 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:51:11.831281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:51:11.831289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:51:11.831298 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:51:11.831306 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:51:11.831316 kernel: Zone ranges:
Sep 13 00:51:11.831324 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:51:11.831332 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:51:11.831340 kernel: Normal empty
Sep 13 00:51:11.831348 kernel: Movable zone start for each node
Sep 13 00:51:11.831356 kernel: Early memory node ranges
Sep 13 00:51:11.831364 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:51:11.831372 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:51:11.831380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:51:11.831390 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:51:11.831402 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:51:11.831410 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:51:11.831418 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:51:11.831426 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:51:11.831434 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:51:11.831441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:51:11.831449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:51:11.831457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:51:11.831468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:51:11.831478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:51:11.831487 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:51:11.831495 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:51:11.831503 kernel: TSC deadline timer available
Sep 13 00:51:11.831511 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:51:11.831519 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:51:11.831527 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:51:11.831535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:51:11.831546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:51:11.831554 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:51:11.831561 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:51:11.831569 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:51:11.831577 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 13 00:51:11.831585 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:51:11.831593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:51:11.831601 kernel: Policy zone: DMA32
Sep 13 00:51:11.831610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:51:11.831622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:51:11.831629 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:51:11.831638 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:51:11.831645 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:51:11.831654 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 13 00:51:11.831662 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:51:11.831670 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:51:11.831678 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:51:11.831689 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:51:11.831697 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:51:11.831706 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:51:11.831714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:51:11.831722 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:51:11.831730 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:51:11.831739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:51:11.831747 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:51:11.831755 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:51:11.831765 kernel: random: crng init done
Sep 13 00:51:11.831773 kernel: Console: colour VGA+ 80x25
Sep 13 00:51:11.831781 kernel: printk: console [tty0] enabled
Sep 13 00:51:11.831789 kernel: printk: console [ttyS0] enabled
Sep 13 00:51:11.831797 kernel: ACPI: Core revision 20210730
Sep 13 00:51:11.831805 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:51:11.831813 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:51:11.831821 kernel: x2apic enabled
Sep 13 00:51:11.831829 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:51:11.831837 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:51:11.831848 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:51:11.831856 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 13 00:51:11.831868 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:51:11.831876 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:51:11.831884 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:51:11.831892 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:51:11.831900 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:51:11.831908 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:51:11.831919 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:51:11.831936 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:51:11.831944 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:51:11.831955 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:51:11.831963 kernel: active return thunk: its_return_thunk
Sep 13 00:51:11.831972 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:51:11.831993 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:51:11.832002 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:51:11.832010 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:51:11.832019 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:51:11.832031 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:51:11.832040 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:51:11.832049 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:51:11.832057 kernel: LSM: Security Framework initializing
Sep 13 00:51:11.832065 kernel: SELinux: Initializing.
Sep 13 00:51:11.832074 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:51:11.832082 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:51:11.832093 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:51:11.832102 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:51:11.832110 kernel: signal: max sigframe size: 1776
Sep 13 00:51:11.832119 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:51:11.832128 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:51:11.832136 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:51:11.832144 kernel: x86: Booting SMP configuration:
Sep 13 00:51:11.832153 kernel: .... node #0, CPUs: #1
Sep 13 00:51:11.832162 kernel: kvm-clock: cpu 1, msr 6519f041, secondary cpu clock
Sep 13 00:51:11.832173 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 13 00:51:11.832181 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:51:11.832190 kernel: smpboot: Max logical packages: 1
Sep 13 00:51:11.832198 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 13 00:51:11.832207 kernel: devtmpfs: initialized
Sep 13 00:51:11.832215 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:51:11.832223 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:51:11.832232 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:51:11.832240 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:51:11.832251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:51:11.832260 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:51:11.832268 kernel: audit: type=2000 audit(1757724671.459:1): state=initialized audit_enabled=0 res=1
Sep 13 00:51:11.832277 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:51:11.832285 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:51:11.832293 kernel: cpuidle: using governor menu
Sep 13 00:51:11.832305 kernel: ACPI: bus type PCI registered
Sep 13 00:51:11.832314 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:51:11.832322 kernel: dca service started, version 1.12.1
Sep 13 00:51:11.832333 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:51:11.832341 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:51:11.832350 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:51:11.832358 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:51:11.832366 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:51:11.832379 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:51:11.832394 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:51:11.832405 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:51:11.832417 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:51:11.832431 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:51:11.832444 kernel: ACPI: Interpreter enabled
Sep 13 00:51:11.832455 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:51:11.832463 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:51:11.832472 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:51:11.832481 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:51:11.832489 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:51:11.832690 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:51:11.836035 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:51:11.836071 kernel: acpiphp: Slot [3] registered
Sep 13 00:51:11.836082 kernel: acpiphp: Slot [4] registered
Sep 13 00:51:11.836091 kernel: acpiphp: Slot [5] registered
Sep 13 00:51:11.836099 kernel: acpiphp: Slot [6] registered
Sep 13 00:51:11.836108 kernel: acpiphp: Slot [7] registered
Sep 13 00:51:11.836116 kernel: acpiphp: Slot [8] registered
Sep 13 00:51:11.836125 kernel: acpiphp: Slot [9] registered
Sep 13 00:51:11.836133 kernel: acpiphp: Slot [10] registered
Sep 13 00:51:11.836147 kernel: acpiphp: Slot [11] registered
Sep 13 00:51:11.836155 kernel: acpiphp: Slot [12] registered
Sep 13 00:51:11.836164 kernel: acpiphp: Slot [13] registered
Sep 13 00:51:11.836172 kernel: acpiphp: Slot [14] registered
Sep 13 00:51:11.836181 kernel: acpiphp: Slot [15] registered
Sep 13 00:51:11.836189 kernel: acpiphp: Slot [16] registered
Sep 13 00:51:11.836197 kernel: acpiphp: Slot [17] registered
Sep 13 00:51:11.836206 kernel: acpiphp: Slot [18] registered
Sep 13 00:51:11.836214 kernel: acpiphp: Slot [19] registered
Sep 13 00:51:11.836225 kernel: acpiphp: Slot [20] registered
Sep 13 00:51:11.836234 kernel: acpiphp: Slot [21] registered
Sep 13 00:51:11.836242 kernel: acpiphp: Slot [22] registered
Sep 13 00:51:11.836251 kernel: acpiphp: Slot [23] registered
Sep 13 00:51:11.836259 kernel: acpiphp: Slot [24] registered
Sep 13 00:51:11.836267 kernel: acpiphp: Slot [25] registered
Sep 13 00:51:11.836276 kernel: acpiphp: Slot [26] registered
Sep 13 00:51:11.836284 kernel: acpiphp: Slot [27] registered
Sep 13 00:51:11.836292 kernel: acpiphp: Slot [28] registered
Sep 13 00:51:11.836300 kernel: acpiphp: Slot [29] registered
Sep 13 00:51:11.836311 kernel: acpiphp: Slot [30] registered
Sep 13 00:51:11.836320 kernel: acpiphp: Slot [31] registered
Sep 13 00:51:11.836328 kernel: PCI host bridge to bus 0000:00
Sep 13 00:51:11.836519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:51:11.836672 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:51:11.836762 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:51:11.836844 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:51:11.836932 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:51:11.837035 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:51:11.837153 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:51:11.837258 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:51:11.837365 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:51:11.837484 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:51:11.837583 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:51:11.837688 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:51:11.837781 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:51:11.837871 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:51:11.838040 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:51:11.838154 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:51:11.838283 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:51:11.838383 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:51:11.838484 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:51:11.838648 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:51:11.838795 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:51:11.838952 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:51:11.839111 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:51:11.839223 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:51:11.839323 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:51:11.839450 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:51:11.839558 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:51:11.839675 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:51:11.839818 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:51:11.840010 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:51:11.840179 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:51:11.840335 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:51:11.840489 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:51:11.840658 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:51:11.840802 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:51:11.840901 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:51:11.841064 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:51:11.841169 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:51:11.841263 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:51:11.841352 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:51:11.841455 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:51:11.841567 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:51:11.841705 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:51:11.841811 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:51:11.841900 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:51:11.842020 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:51:11.842135 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:51:11.842228 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:51:11.842239 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:51:11.842249 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:51:11.842258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:51:11.842270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:51:11.842279 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:51:11.842288 kernel: iommu: Default domain type: Translated
Sep 13 00:51:11.842296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:51:11.842391 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:51:11.842499 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:51:11.842588 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:51:11.842604 kernel: vgaarb: loaded
Sep 13 00:51:11.842617 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:51:11.842634 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:51:11.842647 kernel: PTP clock support registered
Sep 13 00:51:11.842659 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:51:11.842670 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:51:11.842681 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:51:11.842693 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:51:11.842706 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:51:11.842718 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:51:11.842731 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:51:11.842750 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:51:11.842762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:51:11.842775 kernel: pnp: PnP ACPI init
Sep 13 00:51:11.842789 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:51:11.842803 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:51:11.842818 kernel: NET: Registered PF_INET protocol family
Sep 13 00:51:11.842833 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:51:11.842846 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:51:11.842864 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:51:11.842879 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:51:11.842894 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:51:11.842908 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:51:11.842923 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:51:11.842938 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:51:11.842952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:51:11.842965 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:51:11.852264 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:51:11.852421 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:51:11.852525 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:51:11.852608 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:51:11.852718 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:51:11.852822 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:51:11.852917 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:51:11.853033 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:51:11.853047 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:51:11.853145 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 30389 usecs
Sep 13 00:51:11.853157 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:51:11.853166 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:51:11.853175 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:51:11.853184 kernel: Initialise system trusted keyrings
Sep 13 00:51:11.853193 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:51:11.853201 kernel: Key type asymmetric registered
Sep 13 00:51:11.853210 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:51:11.853219 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:51:11.853230 kernel: io scheduler mq-deadline registered
Sep 13 00:51:11.853239 kernel: io scheduler kyber registered
Sep 13 00:51:11.853248 kernel: io scheduler bfq registered
Sep 13 00:51:11.853256 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:51:11.853265 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:51:11.853274 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:51:11.853283 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:51:11.853291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:51:11.853300 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:51:11.853311 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:51:11.853320 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:51:11.853328 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:51:11.853336 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:51:11.853456 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:51:11.853551 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:51:11.853742 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:51:11 UTC (1757724671)
Sep 13 00:51:11.853830 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:51:11.853846 kernel: intel_pstate: CPU model not supported
Sep 13 00:51:11.853855 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:51:11.853864 kernel: Segment Routing with IPv6
Sep 13 00:51:11.853873 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:51:11.853882 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:51:11.853890 kernel: Key type dns_resolver registered
Sep 13 00:51:11.853899 kernel: IPI shorthand broadcast: enabled
Sep 13 00:51:11.853907 kernel: sched_clock: Marking stable (631463950, 79297240)->(815674011, -104912821)
Sep 13 00:51:11.853916 kernel: registered taskstats version 1
Sep 13 00:51:11.853927 kernel: Loading compiled-in X.509 certificates
Sep 13 00:51:11.853936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:51:11.853944 kernel: Key type .fscrypt registered
Sep 13 00:51:11.853953 kernel: Key type fscrypt-provisioning registered
Sep 13 00:51:11.853961 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:51:11.853970 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:51:11.853979 kernel: ima: No architecture policies found
Sep 13 00:51:11.854008 kernel: clk: Disabling unused clocks
Sep 13 00:51:11.854021 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:51:11.854032 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:51:11.854042 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:51:11.854054 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:51:11.854066 kernel: Run /init as init process
Sep 13 00:51:11.854075 kernel: with arguments:
Sep 13 00:51:11.854103 kernel: /init
Sep 13 00:51:11.854114 kernel: with environment:
Sep 13 00:51:11.854123 kernel: HOME=/
Sep 13 00:51:11.854131 kernel: TERM=linux
Sep 13 00:51:11.854142 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:51:11.854155 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:51:11.854167 systemd[1]: Detected virtualization kvm.
Sep 13 00:51:11.854177 systemd[1]: Detected architecture x86-64.
Sep 13 00:51:11.854186 systemd[1]: Running in initrd.
Sep 13 00:51:11.854196 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:51:11.854205 systemd[1]: Hostname set to .
Sep 13 00:51:11.854217 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:51:11.854226 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:51:11.854236 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:51:11.854245 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:51:11.854254 systemd[1]: Reached target paths.target.
Sep 13 00:51:11.854263 systemd[1]: Reached target slices.target.
Sep 13 00:51:11.854273 systemd[1]: Reached target swap.target.
Sep 13 00:51:11.854282 systemd[1]: Reached target timers.target.
Sep 13 00:51:11.854294 systemd[1]: Listening on iscsid.socket.
Sep 13 00:51:11.854303 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:51:11.854313 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:51:11.854322 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:51:11.854331 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:51:11.854341 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:51:11.854350 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:51:11.854361 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:51:11.854378 systemd[1]: Reached target sockets.target.
Sep 13 00:51:11.854389 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:51:11.854407 systemd[1]: Finished network-cleanup.service.
Sep 13 00:51:11.854419 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:51:11.854429 systemd[1]: Starting systemd-journald.service...
Sep 13 00:51:11.854438 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:51:11.854451 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:51:11.854460 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:51:11.854469 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:51:11.854479 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:51:11.854489 kernel: audit: type=1130 audit(1757724671.830:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:11.854498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:51:11.854512 systemd-journald[184]: Journal started Sep 13 00:51:11.854575 systemd-journald[184]: Runtime Journal (/run/log/journal/118dd015443d4988a9e8f97e57f5d38f) is 4.9M, max 39.5M, 34.5M free. Sep 13 00:51:11.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.853037 systemd-modules-load[185]: Inserted module 'overlay' Sep 13 00:51:11.858253 systemd-resolved[186]: Positive Trust Anchors: Sep 13 00:51:11.878160 systemd[1]: Started systemd-journald.service. Sep 13 00:51:11.858263 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:51:11.882369 kernel: audit: type=1130 audit(1757724671.877:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.858306 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:51:11.861188 systemd-resolved[186]: Defaulting to hostname 'linux'. Sep 13 00:51:11.878546 systemd[1]: Started systemd-resolved.service. Sep 13 00:51:11.887787 systemd[1]: Finished systemd-vconsole-setup.service. 
Sep 13 00:51:11.904297 kernel: audit: type=1130 audit(1757724671.886:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.904327 kernel: audit: type=1130 audit(1757724671.890:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.904340 kernel: audit: type=1130 audit(1757724671.893:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.904351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:51:11.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.890593 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:51:11.894794 systemd[1]: Reached target nss-lookup.target. Sep 13 00:51:11.898433 systemd[1]: Starting dracut-cmdline-ask.service... 
Sep 13 00:51:11.912427 kernel: Bridge firewalling registered Sep 13 00:51:11.912019 systemd-modules-load[185]: Inserted module 'br_netfilter' Sep 13 00:51:11.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.923716 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:51:11.928329 kernel: audit: type=1130 audit(1757724671.923:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.927965 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:51:11.936004 kernel: SCSI subsystem initialized Sep 13 00:51:11.941274 dracut-cmdline[202]: dracut-dracut-053 Sep 13 00:51:11.944910 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:51:11.959117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:51:11.959180 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:51:11.959195 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:51:11.962695 systemd-modules-load[185]: Inserted module 'dm_multipath' Sep 13 00:51:11.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:11.963963 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:51:11.965130 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:51:11.969109 kernel: audit: type=1130 audit(1757724671.963:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.972480 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:51:11.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:11.976012 kernel: audit: type=1130 audit(1757724671.971:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:12.016023 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:51:12.035004 kernel: iscsi: registered transport (tcp) Sep 13 00:51:12.060017 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:51:12.060099 kernel: QLogic iSCSI HBA Driver Sep 13 00:51:12.099342 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:51:12.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:12.100734 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:51:12.104104 kernel: audit: type=1130 audit(1757724672.098:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:12.155082 kernel: raid6: avx2x4 gen() 17654 MB/s Sep 13 00:51:12.172047 kernel: raid6: avx2x4 xor() 9345 MB/s Sep 13 00:51:12.189043 kernel: raid6: avx2x2 gen() 17484 MB/s Sep 13 00:51:12.206040 kernel: raid6: avx2x2 xor() 20441 MB/s Sep 13 00:51:12.223038 kernel: raid6: avx2x1 gen() 13437 MB/s Sep 13 00:51:12.240043 kernel: raid6: avx2x1 xor() 17804 MB/s Sep 13 00:51:12.257059 kernel: raid6: sse2x4 gen() 11862 MB/s Sep 13 00:51:12.274067 kernel: raid6: sse2x4 xor() 6961 MB/s Sep 13 00:51:12.291040 kernel: raid6: sse2x2 gen() 13239 MB/s Sep 13 00:51:12.308044 kernel: raid6: sse2x2 xor() 8567 MB/s Sep 13 00:51:12.325045 kernel: raid6: sse2x1 gen() 12322 MB/s Sep 13 00:51:12.342134 kernel: raid6: sse2x1 xor() 6032 MB/s Sep 13 00:51:12.342203 kernel: raid6: using algorithm avx2x4 gen() 17654 MB/s Sep 13 00:51:12.342215 kernel: raid6: .... xor() 9345 MB/s, rmw enabled Sep 13 00:51:12.343160 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:51:12.356021 kernel: xor: automatically using best checksumming function avx Sep 13 00:51:12.460043 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:51:12.475614 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:51:12.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:12.476000 audit: BPF prog-id=7 op=LOAD Sep 13 00:51:12.476000 audit: BPF prog-id=8 op=LOAD Sep 13 00:51:12.478142 systemd[1]: Starting systemd-udevd.service... Sep 13 00:51:12.492586 systemd-udevd[385]: Using default interface naming scheme 'v252'. Sep 13 00:51:12.499218 systemd[1]: Started systemd-udevd.service. Sep 13 00:51:12.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:12.504218 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:51:12.522370 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Sep 13 00:51:12.565803 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:51:12.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:12.567193 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:51:12.625217 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:51:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:12.675663 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 00:51:12.759065 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:51:12.759100 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:51:12.759309 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:51:12.759330 kernel: GPT:9289727 != 125829119 Sep 13 00:51:12.759346 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:51:12.759362 kernel: GPT:9289727 != 125829119 Sep 13 00:51:12.759379 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:51:12.759397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:51:12.759416 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:51:12.759441 kernel: AES CTR mode by8 optimization enabled Sep 13 00:51:12.768005 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 13 00:51:12.794265 kernel: libata version 3.00 loaded. 
Sep 13 00:51:12.794284 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 13 00:51:12.822937 kernel: ACPI: bus type USB registered Sep 13 00:51:12.822964 kernel: usbcore: registered new interface driver usbfs Sep 13 00:51:12.822994 kernel: usbcore: registered new interface driver hub Sep 13 00:51:12.823012 kernel: usbcore: registered new device driver usb Sep 13 00:51:12.823038 kernel: scsi host1: ata_piix Sep 13 00:51:12.823231 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 13 00:51:12.823250 kernel: ehci-pci: EHCI PCI platform driver Sep 13 00:51:12.823268 kernel: scsi host2: ata_piix Sep 13 00:51:12.823418 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 13 00:51:12.823437 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 13 00:51:12.826008 kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 13 00:51:12.833014 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (438) Sep 13 00:51:12.837875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:51:12.886011 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 13 00:51:12.886267 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 13 00:51:12.886432 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 13 00:51:12.886588 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Sep 13 00:51:12.886750 kernel: hub 1-0:1.0: USB hub found Sep 13 00:51:12.886965 kernel: hub 1-0:1.0: 2 ports detected Sep 13 00:51:12.888998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:51:12.892266 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:51:12.892707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:51:12.896596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:51:12.897850 systemd[1]: Starting disk-uuid.service... 
Sep 13 00:51:12.904936 disk-uuid[506]: Primary Header is updated. Sep 13 00:51:12.904936 disk-uuid[506]: Secondary Entries is updated. Sep 13 00:51:12.904936 disk-uuid[506]: Secondary Header is updated. Sep 13 00:51:12.913011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:51:12.918027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:51:13.923025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:51:13.923421 disk-uuid[507]: The operation has completed successfully. Sep 13 00:51:13.970136 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:51:13.970960 systemd[1]: Finished disk-uuid.service. Sep 13 00:51:13.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:13.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:13.973237 systemd[1]: Starting verity-setup.service... Sep 13 00:51:13.992030 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:51:14.040105 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:51:14.041642 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:51:14.044158 systemd[1]: Finished verity-setup.service. Sep 13 00:51:14.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.130014 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:51:14.130943 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:51:14.131489 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 13 00:51:14.132549 systemd[1]: Starting ignition-setup.service... Sep 13 00:51:14.133917 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:51:14.150460 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:51:14.150535 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:51:14.150552 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:51:14.171883 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:51:14.179101 systemd[1]: Finished ignition-setup.service. Sep 13 00:51:14.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.180383 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:51:14.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.276000 audit: BPF prog-id=9 op=LOAD Sep 13 00:51:14.275760 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:51:14.277811 systemd[1]: Starting systemd-networkd.service... Sep 13 00:51:14.305345 systemd-networkd[690]: lo: Link UP Sep 13 00:51:14.305355 systemd-networkd[690]: lo: Gained carrier Sep 13 00:51:14.306585 systemd-networkd[690]: Enumeration completed Sep 13 00:51:14.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.307115 systemd[1]: Started systemd-networkd.service. Sep 13 00:51:14.307415 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:51:14.307566 systemd[1]: Reached target network.target. 
Sep 13 00:51:14.308554 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 13 00:51:14.309916 systemd[1]: Starting iscsiuio.service... Sep 13 00:51:14.310433 systemd-networkd[690]: eth1: Link UP Sep 13 00:51:14.310440 systemd-networkd[690]: eth1: Gained carrier Sep 13 00:51:14.316522 systemd-networkd[690]: eth0: Link UP Sep 13 00:51:14.316531 systemd-networkd[690]: eth0: Gained carrier Sep 13 00:51:14.318080 ignition[618]: Ignition 2.14.0 Sep 13 00:51:14.318091 ignition[618]: Stage: fetch-offline Sep 13 00:51:14.318172 ignition[618]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:14.318209 ignition[618]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:14.325110 ignition[618]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:14.326213 ignition[618]: parsed url from cmdline: "" Sep 13 00:51:14.326294 ignition[618]: no config URL provided Sep 13 00:51:14.326883 ignition[618]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:51:14.327547 ignition[618]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:51:14.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.333616 systemd[1]: Started iscsiuio.service. 
Sep 13 00:51:14.331068 ignition[618]: failed to fetch config: resource requires networking Sep 13 00:51:14.335017 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:51:14.331301 ignition[618]: Ignition finished successfully Sep 13 00:51:14.356289 iscsid[696]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:51:14.356289 iscsid[696]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:51:14.356289 iscsid[696]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:51:14.356289 iscsid[696]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:51:14.356289 iscsid[696]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:51:14.356289 iscsid[696]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:51:14.356289 iscsid[696]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:51:14.336601 systemd[1]: Starting ignition-fetch.service... Sep 13 00:51:14.351078 ignition[695]: Ignition 2.14.0 Sep 13 00:51:14.337908 systemd[1]: Starting iscsid.service... 
Sep 13 00:51:14.351085 ignition[695]: Stage: fetch Sep 13 00:51:14.342141 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253 Sep 13 00:51:14.351252 ignition[695]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:14.344204 systemd-networkd[690]: eth0: DHCPv4 address 143.110.227.187/20, gateway 143.110.224.1 acquired from 169.254.169.253 Sep 13 00:51:14.351290 ignition[695]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:14.346036 systemd[1]: Started iscsid.service. Sep 13 00:51:14.354098 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:14.347800 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:51:14.354261 ignition[695]: parsed url from cmdline: "" Sep 13 00:51:14.354265 ignition[695]: no config URL provided Sep 13 00:51:14.354271 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:51:14.354281 ignition[695]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:51:14.354313 ignition[695]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 13 00:51:14.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.366504 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:51:14.367566 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:51:14.368787 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:51:14.369937 systemd[1]: Reached target remote-fs.target. Sep 13 00:51:14.371169 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:51:14.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:51:14.382052 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:51:14.385015 ignition[695]: GET result: OK Sep 13 00:51:14.385176 ignition[695]: parsing config with SHA512: 1f4656c9befc169e88d2dbbd6cd4d488891b8d4ea84ddef67617890081130382fc1c25e994666195b0d7736853fe566780d7ee37e477c8c183a3ea6b0ac509d4 Sep 13 00:51:14.397096 unknown[695]: fetched base config from "system" Sep 13 00:51:14.397116 unknown[695]: fetched base config from "system" Sep 13 00:51:14.397128 unknown[695]: fetched user config from "digitalocean" Sep 13 00:51:14.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.398112 ignition[695]: fetch: fetch complete Sep 13 00:51:14.399722 systemd[1]: Finished ignition-fetch.service. Sep 13 00:51:14.398121 ignition[695]: fetch: fetch passed Sep 13 00:51:14.400980 systemd[1]: Starting ignition-kargs.service... Sep 13 00:51:14.398198 ignition[695]: Ignition finished successfully Sep 13 00:51:14.413773 ignition[715]: Ignition 2.14.0 Sep 13 00:51:14.413788 ignition[715]: Stage: kargs Sep 13 00:51:14.413975 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:14.414016 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:14.416268 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:14.418753 ignition[715]: kargs: kargs passed Sep 13 00:51:14.418831 ignition[715]: Ignition finished successfully Sep 13 00:51:14.419875 systemd[1]: Finished ignition-kargs.service. Sep 13 00:51:14.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:51:14.421349 systemd[1]: Starting ignition-disks.service... Sep 13 00:51:14.433696 ignition[721]: Ignition 2.14.0 Sep 13 00:51:14.433707 ignition[721]: Stage: disks Sep 13 00:51:14.433861 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:14.433893 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:14.436179 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:14.438128 ignition[721]: disks: disks passed Sep 13 00:51:14.438189 ignition[721]: Ignition finished successfully Sep 13 00:51:14.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.439043 systemd[1]: Finished ignition-disks.service. Sep 13 00:51:14.439899 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:51:14.440321 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:51:14.440899 systemd[1]: Reached target local-fs.target. Sep 13 00:51:14.441505 systemd[1]: Reached target sysinit.target. Sep 13 00:51:14.442086 systemd[1]: Reached target basic.target. Sep 13 00:51:14.443690 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:51:14.461330 systemd-fsck[729]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:51:14.464527 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:51:14.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.465936 systemd[1]: Mounting sysroot.mount... Sep 13 00:51:14.477108 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 13 00:51:14.476367 systemd[1]: Mounted sysroot.mount. Sep 13 00:51:14.476902 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:51:14.479340 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:51:14.480425 systemd[1]: Starting flatcar-digitalocean-network.service... Sep 13 00:51:14.482135 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 00:51:14.482549 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:51:14.482586 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:51:14.488941 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:51:14.494794 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:51:14.500902 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:51:14.510838 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:51:14.520898 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:51:14.528123 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:51:14.606156 coreos-metadata[736]: Sep 13 00:51:14.606 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:51:14.615329 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:51:14.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.616614 systemd[1]: Starting ignition-mount.service... Sep 13 00:51:14.617881 systemd[1]: Starting sysroot-boot.service... Sep 13 00:51:14.625645 coreos-metadata[735]: Sep 13 00:51:14.625 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:51:14.632548 coreos-metadata[736]: Sep 13 00:51:14.632 INFO Fetch successful Sep 13 00:51:14.633081 bash[786]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 13 00:51:14.641094 coreos-metadata[735]: Sep 13 00:51:14.638 INFO Fetch successful Sep 13 00:51:14.641998 coreos-metadata[736]: Sep 13 00:51:14.641 INFO wrote hostname ci-3510.3.8-n-8fedea5c61 to /sysroot/etc/hostname Sep 13 00:51:14.644726 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 13 00:51:14.644833 systemd[1]: Finished flatcar-digitalocean-network.service. Sep 13 00:51:14.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:14.647157 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 00:51:14.650044 ignition[788]: INFO : Ignition 2.14.0 Sep 13 00:51:14.650676 ignition[788]: INFO : Stage: mount Sep 13 00:51:14.651724 ignition[788]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:14.652405 ignition[788]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:14.655698 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:14.657405 systemd[1]: Finished sysroot-boot.service. Sep 13 00:51:14.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:14.659021 ignition[788]: INFO : mount: mount passed Sep 13 00:51:14.660136 ignition[788]: INFO : Ignition finished successfully Sep 13 00:51:14.661804 systemd[1]: Finished ignition-mount.service. Sep 13 00:51:14.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:15.062130 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:51:15.073028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796) Sep 13 00:51:15.082185 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:51:15.082264 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:51:15.082277 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:51:15.087610 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:51:15.089316 systemd[1]: Starting ignition-files.service... 
Sep 13 00:51:15.109762 ignition[816]: INFO : Ignition 2.14.0 Sep 13 00:51:15.109762 ignition[816]: INFO : Stage: files Sep 13 00:51:15.110702 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:15.110702 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:15.112211 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:15.118010 ignition[816]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:51:15.119677 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:51:15.119677 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:51:15.122596 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:51:15.123280 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:51:15.124667 unknown[816]: wrote ssh authorized keys file for user: core Sep 13 00:51:15.125863 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:51:15.126764 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:51:15.127724 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:51:15.127724 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:51:15.127724 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:51:15.168051 ignition[816]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:51:15.278309 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:51:15.279684 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:51:15.280646 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:51:15.473424 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 13 00:51:15.599749 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:51:15.599749 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:51:15.600958 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:51:15.605764 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:51:15.606449 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:51:15.621176 systemd-networkd[690]: eth0: Gained IPv6LL Sep 13 00:51:15.941189 systemd-networkd[690]: eth1: Gained IPv6LL Sep 13 00:51:15.951793 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 13 00:51:16.294434 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:51:16.294434 ignition[816]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:51:16.294434 ignition[816]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:51:16.294434 ignition[816]: INFO : files: op(e): [started] processing unit "containerd.service" Sep 13 00:51:16.297088 
ignition[816]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(e): [finished] processing unit "containerd.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:51:16.297088 ignition[816]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:51:16.304869 ignition[816]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:51:16.304869 ignition[816]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:51:16.304869 ignition[816]: INFO : files: files passed Sep 13 00:51:16.304869 ignition[816]: INFO : Ignition finished 
successfully Sep 13 00:51:16.310307 kernel: kauditd_printk_skb: 28 callbacks suppressed Sep 13 00:51:16.310333 kernel: audit: type=1130 audit(1757724676.303:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.303681 systemd[1]: Finished ignition-files.service. Sep 13 00:51:16.310353 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:51:16.310925 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:51:16.312105 systemd[1]: Starting ignition-quench.service... Sep 13 00:51:16.318103 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:51:16.318233 systemd[1]: Finished ignition-quench.service. Sep 13 00:51:16.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.324476 kernel: audit: type=1130 audit(1757724676.318:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.324539 kernel: audit: type=1131 audit(1757724676.318:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:16.326934 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:51:16.327617 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:51:16.331213 kernel: audit: type=1130 audit(1757724676.327:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.328289 systemd[1]: Reached target ignition-complete.target. Sep 13 00:51:16.332592 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:51:16.349200 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:51:16.349870 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:51:16.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.354991 kernel: audit: type=1130 audit(1757724676.349:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.355038 kernel: audit: type=1131 audit(1757724676.352:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:16.352822 systemd[1]: Reached target initrd-fs.target. Sep 13 00:51:16.355429 systemd[1]: Reached target initrd.target. Sep 13 00:51:16.356014 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:51:16.357118 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:51:16.371246 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:51:16.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.378995 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:51:16.381049 kernel: audit: type=1130 audit(1757724676.376:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.389414 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:51:16.390421 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:51:16.391283 systemd[1]: Stopped target timers.target. Sep 13 00:51:16.392068 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:51:16.392636 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:51:16.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.395228 systemd[1]: Stopped target initrd.target. Sep 13 00:51:16.396440 kernel: audit: type=1131 audit(1757724676.392:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.396231 systemd[1]: Stopped target basic.target. Sep 13 00:51:16.396776 systemd[1]: Stopped target ignition-complete.target. 
Sep 13 00:51:16.397313 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:51:16.398102 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:51:16.398596 systemd[1]: Stopped target remote-fs.target. Sep 13 00:51:16.399199 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:51:16.399797 systemd[1]: Stopped target sysinit.target. Sep 13 00:51:16.400376 systemd[1]: Stopped target local-fs.target. Sep 13 00:51:16.401006 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:51:16.401610 systemd[1]: Stopped target swap.target. Sep 13 00:51:16.402209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:51:16.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.402381 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:51:16.403011 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:51:16.406245 kernel: audit: type=1131 audit(1757724676.402:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.406591 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:51:16.406719 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:51:16.409849 kernel: audit: type=1131 audit(1757724676.406:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.407504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 13 00:51:16.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.407602 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:51:16.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.410359 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:51:16.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.410477 systemd[1]: Stopped ignition-files.service. Sep 13 00:51:16.411010 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:51:16.411109 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 00:51:16.413127 systemd[1]: Stopping ignition-mount.service... Sep 13 00:51:16.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.414684 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:51:16.415095 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:51:16.415238 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:51:16.415730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:51:16.415829 systemd[1]: Stopped dracut-pre-trigger.service. 
Sep 13 00:51:16.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.429325 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:51:16.429439 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:51:16.431610 ignition[854]: INFO : Ignition 2.14.0 Sep 13 00:51:16.431610 ignition[854]: INFO : Stage: umount Sep 13 00:51:16.432722 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:51:16.432722 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:51:16.434067 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:51:16.436469 ignition[854]: INFO : umount: umount passed Sep 13 00:51:16.436469 ignition[854]: INFO : Ignition finished successfully Sep 13 00:51:16.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:16.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.435591 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:51:16.435682 systemd[1]: Stopped ignition-mount.service. Sep 13 00:51:16.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.436129 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:51:16.436174 systemd[1]: Stopped ignition-disks.service. Sep 13 00:51:16.436622 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:51:16.436678 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:51:16.451630 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:51:16.451691 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:51:16.452069 systemd[1]: Stopped target network.target. Sep 13 00:51:16.452393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:51:16.452436 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:51:16.456760 systemd[1]: Stopped target paths.target. Sep 13 00:51:16.457061 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:51:16.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.463456 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:51:16.464573 systemd[1]: Stopped target slices.target. Sep 13 00:51:16.465232 systemd[1]: Stopped target sockets.target. Sep 13 00:51:16.466575 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 13 00:51:16.466612 systemd[1]: Closed iscsid.socket. Sep 13 00:51:16.469196 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:51:16.469229 systemd[1]: Closed iscsiuio.socket. Sep 13 00:51:16.469841 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:51:16.469893 systemd[1]: Stopped ignition-setup.service. Sep 13 00:51:16.471113 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:51:16.471704 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:51:16.478293 systemd-networkd[690]: eth1: DHCPv6 lease lost Sep 13 00:51:16.478820 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:51:16.479467 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:51:16.479560 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:51:16.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.480418 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:51:16.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.480461 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:51:16.482118 systemd-networkd[690]: eth0: DHCPv6 lease lost Sep 13 00:51:16.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.483129 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:51:16.483219 systemd[1]: Stopped systemd-resolved.service. 
Sep 13 00:51:16.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.484266 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:51:16.484372 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:51:16.485000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:51:16.485000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:51:16.485639 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:51:16.485674 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:51:16.487169 systemd[1]: Stopping network-cleanup.service... Sep 13 00:51:16.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.487530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:51:16.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.487593 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:51:16.488356 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:51:16.488404 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:51:16.490748 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:51:16.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.490793 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:51:16.491399 systemd[1]: Stopping systemd-udevd.service... 
Sep 13 00:51:16.496960 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:51:16.500556 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:51:16.501122 systemd[1]: Stopped network-cleanup.service. Sep 13 00:51:16.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.503239 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:51:16.503800 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:51:16.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.504890 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:51:16.505449 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:51:16.506294 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:51:16.506823 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:51:16.513153 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:51:16.513247 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:51:16.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.514176 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:51:16.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.514241 systemd[1]: Stopped dracut-cmdline.service. 
Sep 13 00:51:16.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.514699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:51:16.514744 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:51:16.516575 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:51:16.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.517403 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:51:16.517486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:51:16.518159 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:51:16.518217 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:51:16.518688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:51:16.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:16.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:16.518745 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:51:16.520573 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:51:16.525378 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:51:16.525526 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:51:16.526417 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:51:16.528000 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:51:16.536436 systemd[1]: Switching root. Sep 13 00:51:16.538000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:51:16.538000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:51:16.540000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:51:16.540000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:51:16.540000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:51:16.559265 iscsid[696]: iscsid shutting down. Sep 13 00:51:16.559817 systemd-journald[184]: Journal stopped Sep 13 00:51:19.908105 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 13 00:51:19.908196 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:51:19.908220 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:51:19.908240 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:51:19.908259 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:51:19.908283 kernel: SELinux: policy capability open_perms=1 Sep 13 00:51:19.908300 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:51:19.908319 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:51:19.908376 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:51:19.908394 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:51:19.908411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:51:19.908428 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:51:19.908448 systemd[1]: Successfully loaded SELinux policy in 43.761ms. Sep 13 00:51:19.908482 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.384ms. Sep 13 00:51:19.908503 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:51:19.908524 systemd[1]: Detected virtualization kvm. Sep 13 00:51:19.908537 systemd[1]: Detected architecture x86-64. Sep 13 00:51:19.908550 systemd[1]: Detected first boot. Sep 13 00:51:19.908563 systemd[1]: Hostname set to . Sep 13 00:51:19.908576 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:51:19.908592 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:51:19.908605 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:51:19.908629 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:51:19.908646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:51:19.908661 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:51:19.908686 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:51:19.908699 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:51:19.908713 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:51:19.908728 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:51:19.908741 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:51:19.908755 systemd[1]: Created slice system-getty.slice. Sep 13 00:51:19.908775 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:51:19.908796 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:51:19.908816 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:51:19.908835 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:51:19.908849 systemd[1]: Created slice user.slice. Sep 13 00:51:19.908867 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:51:19.908892 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:51:19.908910 systemd[1]: Set up automount boot.automount. Sep 13 00:51:19.908939 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:51:19.908960 systemd[1]: Reached target integritysetup.target. Sep 13 00:51:19.908977 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:51:19.909014 systemd[1]: Reached target remote-fs.target. Sep 13 00:51:19.909026 systemd[1]: Reached target slices.target. Sep 13 00:51:19.909044 systemd[1]: Reached target swap.target. Sep 13 00:51:19.909067 systemd[1]: Reached target torcx.target. 
Sep 13 00:51:19.909180 systemd[1]: Reached target veritysetup.target.
Sep 13 00:51:19.909199 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:51:19.909213 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:51:19.909226 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:51:19.909238 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:51:19.909251 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:51:19.909266 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:51:19.909292 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:51:19.909310 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:51:19.909328 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:51:19.909347 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:51:19.909373 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:51:19.909386 systemd[1]: Mounting media.mount...
Sep 13 00:51:19.909399 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:19.909413 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:51:19.909426 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:51:19.909440 systemd[1]: Mounting tmp.mount...
Sep 13 00:51:19.909460 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:51:19.909474 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:51:19.909486 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:51:19.909529 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:51:19.909551 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:51:19.909575 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:51:19.909593 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:51:19.909610 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:51:19.909628 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:51:19.909655 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:51:19.909673 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:51:19.909692 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:51:19.909709 systemd[1]: Starting systemd-journald.service...
Sep 13 00:51:19.909728 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:51:19.909746 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:51:19.909763 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:51:19.926486 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:51:19.926528 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:19.926550 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:51:19.926563 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:51:19.926576 systemd[1]: Mounted media.mount.
Sep 13 00:51:19.926599 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:51:19.926611 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:51:19.926624 systemd[1]: Mounted tmp.mount.
Sep 13 00:51:19.926637 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:51:19.926650 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:51:19.926663 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:51:19.926677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:51:19.926690 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:51:19.926703 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:51:19.926723 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:51:19.926743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:51:19.926768 systemd-journald[992]: Journal started
Sep 13 00:51:19.928231 systemd-journald[992]: Runtime Journal (/run/log/journal/118dd015443d4988a9e8f97e57f5d38f) is 4.9M, max 39.5M, 34.5M free.
Sep 13 00:51:19.928326 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:51:19.772000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:51:19.772000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:51:19.906000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:51:19.906000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdc8543470 a2=4000 a3=7ffdc854350c items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:51:19.906000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:51:19.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.933297 systemd[1]: Started systemd-journald.service.
Sep 13 00:51:19.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.937704 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:51:19.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.938782 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:51:19.939437 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:51:19.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.940094 systemd[1]: Reached target network-pre.target.
Sep 13 00:51:19.942102 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:51:19.942484 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:51:19.946921 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:51:19.948776 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:51:19.949239 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:51:19.966276 kernel: fuse: init (API version 7.34)
Sep 13 00:51:19.950762 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:51:19.954677 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:51:19.956703 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:51:19.968493 kernel: loop: module loaded
Sep 13 00:51:19.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.968802 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:51:19.969458 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:51:19.969748 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:51:19.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.970208 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:51:19.981776 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:51:19.982699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:51:19.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.988861 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:51:19.990550 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:51:19.991018 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:51:19.993630 systemd-journald[992]: Time spent on flushing to /var/log/journal/118dd015443d4988a9e8f97e57f5d38f is 39.710ms for 1089 entries.
Sep 13 00:51:19.993630 systemd-journald[992]: System Journal (/var/log/journal/118dd015443d4988a9e8f97e57f5d38f) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:51:20.046219 systemd-journald[992]: Received client request to flush runtime journal.
Sep 13 00:51:20.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.011176 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:51:20.047257 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:51:20.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.081150 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:51:20.084571 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:51:20.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.093846 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:51:20.095607 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:51:20.108848 udevadm[1049]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:51:20.118142 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:51:20.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.120044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:51:20.148771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:51:20.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.629041 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:51:20.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.631280 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:51:20.654733 systemd-udevd[1055]: Using default interface naming scheme 'v252'.
Sep 13 00:51:20.679066 systemd[1]: Started systemd-udevd.service.
Sep 13 00:51:20.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.681594 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:51:20.691273 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:51:20.754175 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:51:20.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.757673 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:51:20.774095 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:51:20.775347 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:51:20.776978 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:51:20.779386 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:51:20.779782 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:51:20.779859 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:51:20.780425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:51:20.780599 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:51:20.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.781442 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:51:20.785246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:51:20.785416 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:51:20.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.798358 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:51:20.798541 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:51:20.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.799087 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:51:20.833788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:51:20.894307 systemd-networkd[1062]: lo: Link UP
Sep 13 00:51:20.894320 systemd-networkd[1062]: lo: Gained carrier
Sep 13 00:51:20.894850 systemd-networkd[1062]: Enumeration completed
Sep 13 00:51:20.895010 systemd[1]: Started systemd-networkd.service.
Sep 13 00:51:20.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.895522 systemd-networkd[1062]: eth1: Configuring with /run/systemd/network/10-ea:d4:0a:4f:b2:56.network.
Sep 13 00:51:20.898252 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:20.898279 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:20.898405 systemd-networkd[1062]: eth0: Configuring with /run/systemd/network/10-46:32:6c:c8:a1:54.network.
Sep 13 00:51:20.899166 systemd-networkd[1062]: eth1: Link UP
Sep 13 00:51:20.899176 systemd-networkd[1062]: eth1: Gained carrier
Sep 13 00:51:20.903332 systemd-networkd[1062]: eth0: Link UP
Sep 13 00:51:20.903342 systemd-networkd[1062]: eth0: Gained carrier
Sep 13 00:51:20.904004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:51:20.913035 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:51:20.929000 audit[1071]: AVC avc: denied { confidentiality } for pid=1071 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:51:20.929000 audit[1071]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556814db0320 a1=338ec a2=7fb8840a2bc5 a3=5 items=110 ppid=1055 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:51:20.929000 audit: CWD cwd="/"
Sep 13 00:51:20.929000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=1 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=2 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=3 name=(null) inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=4 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=5 name=(null) inode=13907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=6 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=7 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=8 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=9 name=(null) inode=13909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=10 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=11 name=(null) inode=13910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=12 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=13 name=(null) inode=13911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=14 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=15 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=16 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=17 name=(null) inode=13913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=18 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=19 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=20 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=21 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=22 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=23 name=(null) inode=13916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=24 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=25 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=26 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=27 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=28 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=29 name=(null) inode=13919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=30 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=31 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=32 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=33 name=(null) inode=13921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=34 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=35 name=(null) inode=13922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=36 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=37 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=38 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=39 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=40 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=41 name=(null) inode=13925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=42 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=43 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=44 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=45 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=46 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=47 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=48 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=49 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=50 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=51 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=52 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=53 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=55 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=56 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=57 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=58 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=59 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=60 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=61 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=62 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=63 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=64 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=65 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=66 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=67 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=68 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=69 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=70 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=71 name=(null) inode=13940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=72 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=73 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=74 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=75 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=76 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:51:20.929000 audit: PATH item=77 name=(null) inode=13943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=78 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=79 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=80 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=81 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=82 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=83 name=(null) inode=13946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=84 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=85 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=86 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=87 name=(null) inode=13948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=88 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=89 name=(null) inode=13949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=90 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=91 name=(null) inode=13950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=92 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=93 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=94 name=(null) inode=13947 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=95 name=(null) inode=13952 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:51:20.929000 audit: PATH item=96 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=97 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=98 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=99 name=(null) inode=13954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=100 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=101 name=(null) inode=13955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=102 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=103 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=104 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=105 
name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=106 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=107 name=(null) inode=13958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PATH item=109 name=(null) inode=13959 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:20.929000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:51:20.994004 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 13 00:51:20.997099 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:51:21.001011 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:51:21.087012 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:51:21.111496 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:51:21.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.113432 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:51:21.130772 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 13 00:51:21.155384 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:51:21.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.155870 systemd[1]: Reached target cryptsetup.target. Sep 13 00:51:21.157669 systemd[1]: Starting lvm2-activation.service... Sep 13 00:51:21.166176 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:51:21.193359 systemd[1]: Finished lvm2-activation.service. Sep 13 00:51:21.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.193869 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:51:21.195833 systemd[1]: Mounting media-configdrive.mount... Sep 13 00:51:21.196257 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:51:21.196318 systemd[1]: Reached target machines.target. Sep 13 00:51:21.197977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:51:21.210604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:51:21.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.216061 kernel: ISO 9660 Extensions: RRIP_1991A Sep 13 00:51:21.216580 systemd[1]: Mounted media-configdrive.mount. Sep 13 00:51:21.217006 systemd[1]: Reached target local-fs.target. Sep 13 00:51:21.219036 systemd[1]: Starting ldconfig.service... 
Sep 13 00:51:21.220336 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.220414 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:21.222421 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:51:21.227238 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:51:21.231592 systemd[1]: Starting systemd-sysext.service... Sep 13 00:51:21.241488 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Sep 13 00:51:21.243069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:51:21.249658 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:51:21.259264 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:51:21.259550 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:51:21.274659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:51:21.275922 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:51:21.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.290178 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:51:21.318452 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:51:21.339231 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:51:21.352681 (sd-sysext)[1123]: Using extensions 'kubernetes'. Sep 13 00:51:21.355140 (sd-sysext)[1123]: Merged extensions into '/usr'. Sep 13 00:51:21.381324 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:21.383507 systemd[1]: Mounting usr-share-oem.mount... 
Sep 13 00:51:21.384231 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.386313 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:21.388510 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:21.390959 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:21.391408 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.391567 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:21.391747 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:21.395415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:21.395624 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:21.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.399462 kernel: kauditd_printk_skb: 206 callbacks suppressed Sep 13 00:51:21.399632 kernel: audit: type=1130 audit(1757724681.395:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.399684 kernel: audit: type=1131 audit(1757724681.399:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:21.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.403669 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:21.403944 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:21.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.404833 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:51:21.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.410681 kernel: audit: type=1130 audit(1757724681.403:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.410835 kernel: audit: type=1131 audit(1757724681.403:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.423420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:21.423677 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:21.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:21.424416 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.429357 kernel: audit: type=1130 audit(1757724681.423:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.429596 kernel: audit: type=1131 audit(1757724681.423:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.440075 kernel: audit: type=1130 audit(1757724681.436:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.432458 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:51:21.436635 systemd[1]: Finished systemd-sysext.service. Sep 13 00:51:21.443660 systemd[1]: Starting ensure-sysext.service... Sep 13 00:51:21.447441 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Sep 13 00:51:21.456731 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Sep 13 00:51:21.456731 systemd-fsck[1120]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:51:21.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.481124 kernel: audit: type=1130 audit(1757724681.477:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.477547 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:51:21.489283 systemd[1]: Reloading. Sep 13 00:51:21.494497 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:51:21.500325 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:51:21.510390 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:51:21.612236 /usr/lib/systemd/system-generators/torcx-generator[1158]: time="2025-09-13T00:51:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:51:21.612292 /usr/lib/systemd/system-generators/torcx-generator[1158]: time="2025-09-13T00:51:21Z" level=info msg="torcx already run" Sep 13 00:51:21.666915 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 13 00:51:21.768039 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:51:21.768064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:51:21.792224 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:51:21.870168 systemd[1]: Finished ldconfig.service. Sep 13 00:51:21.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.873022 kernel: audit: type=1130 audit(1757724681.869:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.886025 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.888383 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:21.891480 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:21.896519 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:21.897779 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.898407 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:21.908332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:21.909024 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:51:21.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.914018 kernel: audit: type=1130 audit(1757724681.908:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.916624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:21.917338 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:21.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.919215 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:21.919772 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:21.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:51:21.926287 systemd[1]: Mounting boot.mount... Sep 13 00:51:21.927469 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:21.928314 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.931425 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:21.935134 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:21.938755 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:21.950188 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.950523 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:21.950783 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:21.964498 systemd[1]: Mounted boot.mount. Sep 13 00:51:21.966423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:21.969531 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:21.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:21.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:21.972758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:21.973072 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:21.974409 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:21.974688 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:21.979590 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:21.980187 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:21.982901 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:21.986171 systemd[1]: Starting modprobe@drm.service... Sep 13 00:51:21.989337 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:21.997198 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:22.000720 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:22.001079 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:22.007217 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 13 00:51:22.008795 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:22.014557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:22.014917 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:22.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:22.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:22.020699 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:51:22.021183 systemd[1]: Finished modprobe@drm.service. Sep 13 00:51:22.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:22.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:22.022892 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:51:22.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:22.024148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:22.024430 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 13 00:51:22.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.030978 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:51:22.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.032348 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:51:22.039763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:51:22.040089 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:51:22.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.040830 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:51:22.105706 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:51:22.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.108394 systemd[1]: Starting audit-rules.service...
Sep 13 00:51:22.110874 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:51:22.113484 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:51:22.120559 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:51:22.125601 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:51:22.131783 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:51:22.135045 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:51:22.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.138545 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:51:22.162000 audit[1250]: SYSTEM_BOOT pid=1250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.170230 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:51:22.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.171913 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:51:22.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.175170 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:51:22.198125 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:51:22.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.237000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:51:22.237000 audit[1268]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6b994640 a2=420 a3=0 items=0 ppid=1243 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:51:22.237000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:51:22.238847 augenrules[1268]: No rules
Sep 13 00:51:22.239662 systemd[1]: Finished audit-rules.service.
Sep 13 00:51:22.258499 systemd-resolved[1246]: Positive Trust Anchors:
Sep 13 00:51:22.258515 systemd-resolved[1246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:51:22.258548 systemd-resolved[1246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:51:22.259307 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:51:22.259872 systemd[1]: Reached target time-set.target.
Sep 13 00:51:22.265998 systemd-resolved[1246]: Using system hostname 'ci-3510.3.8-n-8fedea5c61'.
Sep 13 00:51:22.268573 systemd[1]: Started systemd-resolved.service.
Sep 13 00:51:22.269069 systemd[1]: Reached target network.target.
Sep 13 00:51:22.269356 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:51:22.269672 systemd[1]: Reached target sysinit.target.
Sep 13 00:51:22.270073 systemd[1]: Started motdgen.path.
Sep 13 00:51:22.270389 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:51:22.270923 systemd[1]: Started logrotate.timer.
Sep 13 00:51:22.271314 systemd[1]: Started mdadm.timer.
Sep 13 00:51:22.271582 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:51:22.271873 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:51:22.271904 systemd[1]: Reached target paths.target.
Sep 13 00:51:22.272173 systemd[1]: Reached target timers.target.
Sep 13 00:51:22.272884 systemd[1]: Listening on dbus.socket.
Sep 13 00:51:22.274954 systemd[1]: Starting docker.socket...
Sep 13 00:51:22.277449 systemd[1]: Listening on sshd.socket.
Sep 13 00:51:22.277905 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:51:22.278356 systemd[1]: Listening on docker.socket.
Sep 13 00:51:22.278674 systemd[1]: Reached target sockets.target.
Sep 13 00:51:22.278946 systemd[1]: Reached target basic.target.
Sep 13 00:51:22.279409 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:51:22.279477 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:22.279514 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:22.281141 systemd[1]: Starting containerd.service...
Sep 13 00:51:22.283680 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 13 00:51:22.287779 systemd[1]: Starting dbus.service...
Sep 13 00:51:22.290123 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:51:22.292328 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:51:22.296266 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:51:22.304173 systemd[1]: Starting motdgen.service...
Sep 13 00:51:22.308412 systemd[1]: Starting prepare-helm.service...
Sep 13 00:51:22.309016 jq[1281]: false
Sep 13 00:51:22.313197 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:51:22.316499 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:51:23.007079 systemd-resolved[1246]: Clock change detected. Flushing caches.
Sep 13 00:51:23.007458 systemd-timesyncd[1248]: Contacted time server 51.81.20.74:123 (0.flatcar.pool.ntp.org).
Sep 13 00:51:23.007537 systemd-timesyncd[1248]: Initial clock synchronization to Sat 2025-09-13 00:51:23.007010 UTC.
Sep 13 00:51:23.014609 systemd[1]: Starting systemd-logind.service...
Sep 13 00:51:23.015080 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:51:23.015195 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:51:23.018224 systemd[1]: Starting update-engine.service...
Sep 13 00:51:23.026656 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:51:23.034581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:51:23.034939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:51:23.036605 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:51:23.056800 jq[1303]: true
Sep 13 00:51:23.037790 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:51:23.075545 jq[1307]: true
Sep 13 00:51:23.080979 tar[1306]: linux-amd64/helm
Sep 13 00:51:23.090964 systemd-networkd[1062]: eth1: Gained IPv6LL
Sep 13 00:51:23.097074 extend-filesystems[1284]: Found loop1
Sep 13 00:51:23.097074 extend-filesystems[1284]: Found vda
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda1
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda2
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda3
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found usr
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda4
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda6
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda7
Sep 13 00:51:23.098235 extend-filesystems[1284]: Found vda9
Sep 13 00:51:23.098235 extend-filesystems[1284]: Checking size of /dev/vda9
Sep 13 00:51:23.102501 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:51:23.103333 systemd[1]: Reached target network-online.target.
Sep 13 00:51:23.106316 systemd[1]: Starting kubelet.service...
Sep 13 00:51:23.112745 dbus-daemon[1279]: [system] SELinux support is enabled
Sep 13 00:51:23.113061 systemd[1]: Started dbus.service.
Sep 13 00:51:23.115993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:51:23.116037 systemd[1]: Reached target system-config.target.
Sep 13 00:51:23.116539 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:51:23.116559 systemd[1]: Reached target user-config.target.
Sep 13 00:51:23.138891 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:51:23.139203 systemd[1]: Finished motdgen.service.
Sep 13 00:51:23.171517 extend-filesystems[1284]: Resized partition /dev/vda9
Sep 13 00:51:23.184168 extend-filesystems[1333]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:51:23.188975 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 13 00:51:23.196522 update_engine[1298]: I0913 00:51:23.195735  1298 main.cc:92] Flatcar Update Engine starting
Sep 13 00:51:23.207385 systemd[1]: Started update-engine.service.
Sep 13 00:51:23.208071 update_engine[1298]: I0913 00:51:23.207947  1298 update_check_scheduler.cc:74] Next update check in 11m26s
Sep 13 00:51:23.210380 systemd[1]: Started locksmithd.service.
Sep 13 00:51:23.272780 bash[1344]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:51:23.273875 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:51:23.283585 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 13 00:51:23.293643 extend-filesystems[1333]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:51:23.293643 extend-filesystems[1333]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 13 00:51:23.293643 extend-filesystems[1333]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 13 00:51:23.295872 extend-filesystems[1284]: Resized filesystem in /dev/vda9
Sep 13 00:51:23.295872 extend-filesystems[1284]: Found vdb
Sep 13 00:51:23.294393 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:51:23.294716 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:51:23.349076 coreos-metadata[1278]: Sep 13 00:51:23.348 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:51:23.360034 env[1309]: time="2025-09-13T00:51:23.359401672Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:51:23.376865 coreos-metadata[1278]: Sep 13 00:51:23.375 INFO Fetch successful
Sep 13 00:51:23.382414 unknown[1278]: wrote ssh authorized keys file for user: core
Sep 13 00:51:23.393424 update-ssh-keys[1353]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:51:23.394209 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:51:23.418988 systemd-logind[1297]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:51:23.419458 systemd-logind[1297]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:51:23.419775 systemd-logind[1297]: New seat seat0.
Sep 13 00:51:23.426931 systemd[1]: Started systemd-logind.service.
Sep 13 00:51:23.446122 env[1309]: time="2025-09-13T00:51:23.446017813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:51:23.446503 env[1309]: time="2025-09-13T00:51:23.446482660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.450803 env[1309]: time="2025-09-13T00:51:23.450741541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:23.451077 env[1309]: time="2025-09-13T00:51:23.451054553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.451573 env[1309]: time="2025-09-13T00:51:23.451542211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:23.451674 env[1309]: time="2025-09-13T00:51:23.451658117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.451741 env[1309]: time="2025-09-13T00:51:23.451724705Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:51:23.451819 env[1309]: time="2025-09-13T00:51:23.451805676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.452054 env[1309]: time="2025-09-13T00:51:23.452025788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.452516 env[1309]: time="2025-09-13T00:51:23.452494376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:23.452820 env[1309]: time="2025-09-13T00:51:23.452798731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:23.452893 env[1309]: time="2025-09-13T00:51:23.452878567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:51:23.453027 env[1309]: time="2025-09-13T00:51:23.453009742Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:51:23.453133 env[1309]: time="2025-09-13T00:51:23.453113901Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:51:23.457028 env[1309]: time="2025-09-13T00:51:23.456966730Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:51:23.457190 env[1309]: time="2025-09-13T00:51:23.457173571Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:51:23.457283 env[1309]: time="2025-09-13T00:51:23.457258088Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:51:23.457389 env[1309]: time="2025-09-13T00:51:23.457374919Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457503 env[1309]: time="2025-09-13T00:51:23.457489820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457573 env[1309]: time="2025-09-13T00:51:23.457560191Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457634 env[1309]: time="2025-09-13T00:51:23.457621217Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457694 env[1309]: time="2025-09-13T00:51:23.457681441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457760 env[1309]: time="2025-09-13T00:51:23.457747378Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457833 env[1309]: time="2025-09-13T00:51:23.457819881Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457893 env[1309]: time="2025-09-13T00:51:23.457880477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.457978 env[1309]: time="2025-09-13T00:51:23.457963721Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:51:23.458174 env[1309]: time="2025-09-13T00:51:23.458154920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:51:23.458355 env[1309]: time="2025-09-13T00:51:23.458338687Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:51:23.458849 env[1309]: time="2025-09-13T00:51:23.458825784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:51:23.459096 env[1309]: time="2025-09-13T00:51:23.459078364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459170 env[1309]: time="2025-09-13T00:51:23.459155972Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:51:23.459279 env[1309]: time="2025-09-13T00:51:23.459264996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459345 env[1309]: time="2025-09-13T00:51:23.459331125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459408 env[1309]: time="2025-09-13T00:51:23.459393692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459518 env[1309]: time="2025-09-13T00:51:23.459500947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459590 env[1309]: time="2025-09-13T00:51:23.459576464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459659 env[1309]: time="2025-09-13T00:51:23.459646921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459719 env[1309]: time="2025-09-13T00:51:23.459707057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459779 env[1309]: time="2025-09-13T00:51:23.459766570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.459850 env[1309]: time="2025-09-13T00:51:23.459836951Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:51:23.460112 env[1309]: time="2025-09-13T00:51:23.460091721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.460200 env[1309]: time="2025-09-13T00:51:23.460185471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.460264 env[1309]: time="2025-09-13T00:51:23.460250874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.460340 env[1309]: time="2025-09-13T00:51:23.460325107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:51:23.460438 env[1309]: time="2025-09-13T00:51:23.460417413Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:51:23.460504 env[1309]: time="2025-09-13T00:51:23.460491116Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:51:23.460588 env[1309]: time="2025-09-13T00:51:23.460572420Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:51:23.460680 env[1309]: time="2025-09-13T00:51:23.460665611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:51:23.461045 env[1309]: time="2025-09-13T00:51:23.460983684Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:51:23.463058 env[1309]: time="2025-09-13T00:51:23.461243897Z" level=info msg="Connect containerd service"
Sep 13 00:51:23.463058 env[1309]: time="2025-09-13T00:51:23.461290080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:51:23.482486 env[1309]: time="2025-09-13T00:51:23.482418792Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:51:23.483201 env[1309]: time="2025-09-13T00:51:23.483169136Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:51:23.483374 env[1309]: time="2025-09-13T00:51:23.483358333Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:51:23.483536 env[1309]: time="2025-09-13T00:51:23.483518584Z" level=info msg="containerd successfully booted in 0.125715s"
Sep 13 00:51:23.484052 systemd[1]: Started containerd.service.
Sep 13 00:51:23.494053 env[1309]: time="2025-09-13T00:51:23.493970973Z" level=info msg="Start subscribing containerd event"
Sep 13 00:51:23.494843 env[1309]: time="2025-09-13T00:51:23.494810914Z" level=info msg="Start recovering state"
Sep 13 00:51:23.495098 env[1309]: time="2025-09-13T00:51:23.495082736Z" level=info msg="Start event monitor"
Sep 13 00:51:23.495180 env[1309]: time="2025-09-13T00:51:23.495167088Z" level=info msg="Start snapshots syncer"
Sep 13 00:51:23.495277 env[1309]: time="2025-09-13T00:51:23.495258847Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:51:23.495423 env[1309]: time="2025-09-13T00:51:23.495408709Z" level=info msg="Start streaming server"
Sep 13 00:51:23.537519 systemd-networkd[1062]: eth0: Gained IPv6LL
Sep 13 00:51:23.922455 tar[1306]: linux-amd64/LICENSE
Sep 13 00:51:23.922943 tar[1306]: linux-amd64/README.md
Sep 13 00:51:23.932024 systemd[1]: Finished prepare-helm.service.
Sep 13 00:51:24.100070 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:51:24.631614 systemd[1]: Started kubelet.service.
Sep 13 00:51:25.166696 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:51:25.205754 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:51:25.209129 systemd[1]: Starting issuegen.service...
Sep 13 00:51:25.227155 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:51:25.227551 systemd[1]: Finished issuegen.service.
Sep 13 00:51:25.230823 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:51:25.247851 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:51:25.250224 systemd[1]: Started getty@tty1.service.
Sep 13 00:51:25.252487 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:51:25.253651 systemd[1]: Reached target getty.target.
Sep 13 00:51:25.254102 systemd[1]: Reached target multi-user.target.
Sep 13 00:51:25.260724 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:51:25.272058 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:51:25.272326 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:51:25.280049 systemd[1]: Startup finished in 5.816s (kernel) + 7.903s (userspace) = 13.720s.
Sep 13 00:51:25.465773 kubelet[1371]: E0913 00:51:25.465646    1371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:25.468543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:25.468755 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:25.844595 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:51:25.846450 systemd[1]: Started sshd@0-143.110.227.187:22-147.75.109.163:42986.service.
Sep 13 00:51:25.914058 sshd[1397]: Accepted publickey for core from 147.75.109.163 port 42986 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:51:25.916262 sshd[1397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:25.926796 systemd[1]: Created slice user-500.slice.
Sep 13 00:51:25.928098 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:51:25.931402 systemd-logind[1297]: New session 1 of user core.
Sep 13 00:51:25.941024 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:51:25.942742 systemd[1]: Starting user@500.service...
Sep 13 00:51:25.951258 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:26.039906 systemd[1402]: Queued start job for default target default.target.
Sep 13 00:51:26.040193 systemd[1402]: Reached target paths.target.
Sep 13 00:51:26.040212 systemd[1402]: Reached target sockets.target.
Sep 13 00:51:26.040225 systemd[1402]: Reached target timers.target.
Sep 13 00:51:26.040237 systemd[1402]: Reached target basic.target.
Sep 13 00:51:26.040480 systemd[1]: Started user@500.service.
Sep 13 00:51:26.041620 systemd[1]: Started session-1.scope.
Sep 13 00:51:26.042038 systemd[1402]: Reached target default.target.
Sep 13 00:51:26.042314 systemd[1402]: Startup finished in 82ms.
Sep 13 00:51:26.099637 systemd[1]: Started sshd@1-143.110.227.187:22-147.75.109.163:42998.service.
Sep 13 00:51:26.146527 sshd[1411]: Accepted publickey for core from 147.75.109.163 port 42998 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:51:26.148508 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:26.155002 systemd[1]: Started session-2.scope.
Sep 13 00:51:26.155358 systemd-logind[1297]: New session 2 of user core.
Sep 13 00:51:26.218698 sshd[1411]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:26.224587 systemd[1]: Started sshd@2-143.110.227.187:22-147.75.109.163:43000.service.
Sep 13 00:51:26.225228 systemd[1]: sshd@1-143.110.227.187:22-147.75.109.163:42998.service: Deactivated successfully.
Sep 13 00:51:26.227738 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:51:26.228584 systemd-logind[1297]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:51:26.230293 systemd-logind[1297]: Removed session 2.
Sep 13 00:51:26.276048 sshd[1417]: Accepted publickey for core from 147.75.109.163 port 43000 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:51:26.277930 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:26.285142 systemd-logind[1297]: New session 3 of user core.
Sep 13 00:51:26.288450 systemd[1]: Started session-3.scope.
Sep 13 00:51:26.346924 sshd[1417]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:26.352400 systemd[1]: Started sshd@3-143.110.227.187:22-147.75.109.163:43014.service.
Sep 13 00:51:26.353553 systemd[1]: sshd@2-143.110.227.187:22-147.75.109.163:43000.service: Deactivated successfully.
Sep 13 00:51:26.355030 systemd-logind[1297]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:51:26.355166 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:51:26.361647 systemd-logind[1297]: Removed session 3.
Sep 13 00:51:26.407130 sshd[1423]: Accepted publickey for core from 147.75.109.163 port 43014 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:51:26.409248 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:26.415032 systemd[1]: Started session-4.scope.
Sep 13 00:51:26.416276 systemd-logind[1297]: New session 4 of user core.
Sep 13 00:51:26.480197 sshd[1423]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:26.485196 systemd[1]: Started sshd@4-143.110.227.187:22-147.75.109.163:43026.service.
Sep 13 00:51:26.486969 systemd-logind[1297]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:51:26.487237 systemd[1]: sshd@3-143.110.227.187:22-147.75.109.163:43014.service: Deactivated successfully.
Sep 13 00:51:26.488207 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:51:26.488749 systemd-logind[1297]: Removed session 4.
Sep 13 00:51:26.537172 sshd[1430]: Accepted publickey for core from 147.75.109.163 port 43026 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:51:26.539107 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:26.544996 systemd-logind[1297]: New session 5 of user core.
Sep 13 00:51:26.546208 systemd[1]: Started session-5.scope.
Sep 13 00:51:26.622739 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:51:26.623509 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:51:26.662323 systemd[1]: Starting docker.service...
Sep 13 00:51:26.723802 env[1446]: time="2025-09-13T00:51:26.723721958Z" level=info msg="Starting up"
Sep 13 00:51:26.725625 env[1446]: time="2025-09-13T00:51:26.725578854Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:26.725625 env[1446]: time="2025-09-13T00:51:26.725607136Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:26.725625 env[1446]: time="2025-09-13T00:51:26.725629938Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:26.725874 env[1446]: time="2025-09-13T00:51:26.725642952Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:26.734435 env[1446]: time="2025-09-13T00:51:26.734374627Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:26.734435 env[1446]: time="2025-09-13T00:51:26.734410639Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:26.734435 env[1446]: time="2025-09-13T00:51:26.734434110Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:26.734435 env[1446]: time="2025-09-13T00:51:26.734445250Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:26.807851 env[1446]: time="2025-09-13T00:51:26.807759633Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:51:26.807851 env[1446]: time="2025-09-13T00:51:26.807807860Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:51:26.808272 env[1446]: time="2025-09-13T00:51:26.808130368Z" level=info msg="Loading containers: start."
Sep 13 00:51:26.970950 kernel: Initializing XFRM netlink socket
Sep 13 00:51:27.013187 env[1446]: time="2025-09-13T00:51:27.013139038Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:51:27.098384 systemd-networkd[1062]: docker0: Link UP
Sep 13 00:51:27.122188 env[1446]: time="2025-09-13T00:51:27.122145503Z" level=info msg="Loading containers: done."
Sep 13 00:51:27.141694 env[1446]: time="2025-09-13T00:51:27.141638485Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:51:27.141935 env[1446]: time="2025-09-13T00:51:27.141899709Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:51:27.142052 env[1446]: time="2025-09-13T00:51:27.142035075Z" level=info msg="Daemon has completed initialization"
Sep 13 00:51:27.156578 systemd[1]: Started docker.service.
Sep 13 00:51:27.167233 env[1446]: time="2025-09-13T00:51:27.167125540Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:51:27.191295 systemd[1]: Starting coreos-metadata.service...
Sep 13 00:51:27.242200 coreos-metadata[1564]: Sep 13 00:51:27.242 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:51:27.253810 coreos-metadata[1564]: Sep 13 00:51:27.253 INFO Fetch successful
Sep 13 00:51:27.268721 systemd[1]: Finished coreos-metadata.service.
Sep 13 00:51:28.212656 env[1309]: time="2025-09-13T00:51:28.212599877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:51:28.706598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390051080.mount: Deactivated successfully.
Sep 13 00:51:30.370078 env[1309]: time="2025-09-13T00:51:30.369984468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:30.371850 env[1309]: time="2025-09-13T00:51:30.371791189Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:30.374557 env[1309]: time="2025-09-13T00:51:30.374491285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:30.377054 env[1309]: time="2025-09-13T00:51:30.377001424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:30.378110 env[1309]: time="2025-09-13T00:51:30.378054814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:51:30.378938 env[1309]: time="2025-09-13T00:51:30.378876330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:51:32.112009 env[1309]: time="2025-09-13T00:51:32.111861375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:32.117942 env[1309]: time="2025-09-13T00:51:32.117873281Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:32.119980 env[1309]: time="2025-09-13T00:51:32.119938253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:32.122658 env[1309]: time="2025-09-13T00:51:32.122616622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:32.123866 env[1309]: time="2025-09-13T00:51:32.123822797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:51:32.124550 env[1309]: time="2025-09-13T00:51:32.124520637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:51:33.378389 env[1309]: time="2025-09-13T00:51:33.378329030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:33.381789 env[1309]: time="2025-09-13T00:51:33.381742895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:33.383502 env[1309]: time="2025-09-13T00:51:33.383458149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:33.386030 env[1309]: time="2025-09-13T00:51:33.385986472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:33.387271 env[1309]: time="2025-09-13T00:51:33.387233227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:51:33.388140 env[1309]: time="2025-09-13T00:51:33.388096065Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:51:34.549419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556710906.mount: Deactivated successfully.
Sep 13 00:51:35.460436 env[1309]: time="2025-09-13T00:51:35.460220669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.468672 env[1309]: time="2025-09-13T00:51:35.468606655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.481481 env[1309]: time="2025-09-13T00:51:35.481389902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.490985 env[1309]: time="2025-09-13T00:51:35.490440659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.498266 env[1309]: time="2025-09-13T00:51:35.492971638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:51:35.495041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:51:35.495716 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:35.500495 env[1309]: time="2025-09-13T00:51:35.500440322Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:51:35.500648 systemd[1]: Starting kubelet.service...
Sep 13 00:51:35.697105 systemd[1]: Started kubelet.service.
Sep 13 00:51:35.767206 kubelet[1592]: E0913 00:51:35.767133 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:35.770749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:35.770938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:35.971845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140338962.mount: Deactivated successfully.
Sep 13 00:51:37.007758 env[1309]: time="2025-09-13T00:51:37.007695214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.009628 env[1309]: time="2025-09-13T00:51:37.009570651Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.012047 env[1309]: time="2025-09-13T00:51:37.011996167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.019004 env[1309]: time="2025-09-13T00:51:37.018937275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:51:37.019579 env[1309]: time="2025-09-13T00:51:37.019497387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.020311 env[1309]: time="2025-09-13T00:51:37.020268169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:51:37.457947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592067106.mount: Deactivated successfully.
Sep 13 00:51:37.462792 env[1309]: time="2025-09-13T00:51:37.462718941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.464082 env[1309]: time="2025-09-13T00:51:37.464035413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.465508 env[1309]: time="2025-09-13T00:51:37.465466896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.467008 env[1309]: time="2025-09-13T00:51:37.466961765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:37.467995 env[1309]: time="2025-09-13T00:51:37.467941758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:51:37.468731 env[1309]: time="2025-09-13T00:51:37.468706775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:51:37.951541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980555584.mount: Deactivated successfully.
Sep 13 00:51:40.359077 env[1309]: time="2025-09-13T00:51:40.359016834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:40.361697 env[1309]: time="2025-09-13T00:51:40.361647113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:40.364150 env[1309]: time="2025-09-13T00:51:40.364100438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:40.366811 env[1309]: time="2025-09-13T00:51:40.366762283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:40.368290 env[1309]: time="2025-09-13T00:51:40.368239533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:51:43.357531 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:43.360500 systemd[1]: Starting kubelet.service...
Sep 13 00:51:43.409696 systemd[1]: Reloading.
Sep 13 00:51:43.529651 /usr/lib/systemd/system-generators/torcx-generator[1644]: time="2025-09-13T00:51:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:51:43.529680 /usr/lib/systemd/system-generators/torcx-generator[1644]: time="2025-09-13T00:51:43Z" level=info msg="torcx already run"
Sep 13 00:51:43.668991 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:51:43.669253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:51:43.692297 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:51:43.807389 systemd[1]: Started kubelet.service.
Sep 13 00:51:43.815996 systemd[1]: Stopping kubelet.service...
Sep 13 00:51:43.819234 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:51:43.819488 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:43.821378 systemd[1]: Starting kubelet.service...
Sep 13 00:51:43.959928 systemd[1]: Started kubelet.service.
Sep 13 00:51:44.019855 kubelet[1713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:51:44.020387 kubelet[1713]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:51:44.020479 kubelet[1713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:51:44.020697 kubelet[1713]: I0913 00:51:44.020653 1713 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:51:44.261792 kubelet[1713]: I0913 00:51:44.261716 1713 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:51:44.261792 kubelet[1713]: I0913 00:51:44.261765 1713 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:51:44.262330 kubelet[1713]: I0913 00:51:44.262297 1713 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:51:44.303742 kubelet[1713]: E0913 00:51:44.303695 1713 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.110.227.187:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:44.309251 kubelet[1713]: I0913 00:51:44.309182 1713 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:51:44.321656 kubelet[1713]: E0913 00:51:44.321613 1713 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:51:44.321853 kubelet[1713]: I0913 00:51:44.321840 1713 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:51:44.328563 kubelet[1713]: I0913 00:51:44.328521 1713 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:51:44.330335 kubelet[1713]: I0913 00:51:44.330300 1713 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:51:44.330767 kubelet[1713]: I0913 00:51:44.330720 1713 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:51:44.331204 kubelet[1713]: I0913 00:51:44.330897 1713 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8fedea5c61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:51:44.331408 kubelet[1713]: I0913 00:51:44.331393 1713 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:51:44.331496 kubelet[1713]: I0913 00:51:44.331485 1713 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:51:44.331732 kubelet[1713]: I0913 00:51:44.331715 1713 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:51:44.336294 kubelet[1713]: I0913 00:51:44.336251 1713 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:51:44.336537 kubelet[1713]: W0913 00:51:44.336466 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.227.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8fedea5c61&limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused
Sep 13 00:51:44.336662 kubelet[1713]: I0913 00:51:44.336519 1713 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:51:44.336796 kubelet[1713]: I0913 00:51:44.336783 1713 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:51:44.336954 kubelet[1713]: I0913 00:51:44.336942 1713 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:51:44.339316 kubelet[1713]: E0913 00:51:44.339255 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.110.227.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8fedea5c61&limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:44.341045 kubelet[1713]: W0913 00:51:44.340986 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.227.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused
Sep 13 00:51:44.341252 kubelet[1713]: E0913 00:51:44.341229 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.227.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:44.341461 kubelet[1713]: I0913 00:51:44.341435 1713 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:51:44.342100 kubelet[1713]: I0913 00:51:44.342078 1713 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:51:44.342990 kubelet[1713]: W0913 00:51:44.342964 1713 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:51:44.346536 kubelet[1713]: I0913 00:51:44.346497 1713 server.go:1274] "Started kubelet"
Sep 13 00:51:44.363599 kubelet[1713]: E0913 00:51:44.360680 1713 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.227.187:6443/api/v1/namespaces/default/events\": dial tcp 143.110.227.187:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-8fedea5c61.1864b14e477f7bda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-8fedea5c61,UID:ci-3510.3.8-n-8fedea5c61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-8fedea5c61,},FirstTimestamp:2025-09-13 00:51:44.346438618 +0000 UTC m=+0.374405050,LastTimestamp:2025-09-13 00:51:44.346438618 +0000 UTC m=+0.374405050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-8fedea5c61,}"
Sep 13 00:51:44.367949 kubelet[1713]: E0913 00:51:44.367905 1713 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:51:44.368111 kubelet[1713]: I0913 00:51:44.368007 1713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:51:44.368601 kubelet[1713]: I0913 00:51:44.368574 1713 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:51:44.368717 kubelet[1713]: I0913 00:51:44.368684 1713 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:51:44.370219 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:51:44.370406 kubelet[1713]: I0913 00:51:44.370384 1713 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:51:44.370523 kubelet[1713]: I0913 00:51:44.370505 1713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:51:44.373046 kubelet[1713]: I0913 00:51:44.373018 1713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:51:44.374705 kubelet[1713]: I0913 00:51:44.374658 1713 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:51:44.375653 kubelet[1713]: E0913 00:51:44.375616 1713 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8fedea5c61\" not found"
Sep 13 00:51:44.376056 kubelet[1713]: I0913 00:51:44.376038 1713 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:51:44.376251 kubelet[1713]: I0913 00:51:44.376109 1713 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:51:44.376686 kubelet[1713]: E0913 00:51:44.376559 1713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.227.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8fedea5c61?timeout=10s\": dial tcp 143.110.227.187:6443: connect: connection refused" interval="200ms"
Sep 13 00:51:44.377090 kubelet[1713]: I0913 00:51:44.377035 1713 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:51:44.377650 kubelet[1713]: I0913 00:51:44.377623 1713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:51:44.378439 kubelet[1713]: W0913 00:51:44.378381 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.227.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused
Sep 13 00:51:44.378521 kubelet[1713]: E0913 00:51:44.378451 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.227.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:44.380151 kubelet[1713]: I0913 00:51:44.380111 1713 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:51:44.406586 kubelet[1713]: I0913 00:51:44.406379 1713 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:51:44.406586 kubelet[1713]: I0913 00:51:44.406400 1713 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:51:44.406586 kubelet[1713]: I0913 00:51:44.406427 1713 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:51:44.407706 kubelet[1713]: I0913 00:51:44.407608 1713 policy_none.go:49] "None policy: Start"
Sep 13 00:51:44.408937 kubelet[1713]: I0913 00:51:44.408889 1713 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:51:44.408937 kubelet[1713]: I0913 00:51:44.408934 1713 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:51:44.419834 kubelet[1713]: I0913 00:51:44.419795 1713 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:51:44.420072 kubelet[1713]: I0913 00:51:44.420021 1713 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:51:44.420197 kubelet[1713]: I0913 00:51:44.420038 1713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:51:44.422305 kubelet[1713]: I0913 00:51:44.421851 1713 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:51:44.423319 kubelet[1713]: E0913 00:51:44.423290 1713 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-8fedea5c61\" not found"
Sep 13 00:51:44.425017 kubelet[1713]: I0913 00:51:44.424960 1713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:51:44.427518 kubelet[1713]: I0913 00:51:44.427469 1713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:51:44.427652 kubelet[1713]: I0913 00:51:44.427529 1713 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:51:44.427652 kubelet[1713]: I0913 00:51:44.427579 1713 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:51:44.427652 kubelet[1713]: E0913 00:51:44.427637 1713 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 13 00:51:44.429049 kubelet[1713]: W0913 00:51:44.428996 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.227.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused
Sep 13 00:51:44.429163 kubelet[1713]: E0913 00:51:44.429059 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.227.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:44.524288 kubelet[1713]: I0913 00:51:44.522283 1713 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61"
Sep 13 00:51:44.525130 kubelet[1713]: E0913 00:51:44.525073 1713 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.227.187:6443/api/v1/nodes\":
dial tcp 143.110.227.187:6443: connect: connection refused" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.577655 kubelet[1713]: E0913 00:51:44.577606 1713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.227.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8fedea5c61?timeout=10s\": dial tcp 143.110.227.187:6443: connect: connection refused" interval="400ms" Sep 13 00:51:44.677361 kubelet[1713]: I0913 00:51:44.677290 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677361 kubelet[1713]: I0913 00:51:44.677367 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677618 kubelet[1713]: I0913 00:51:44.677396 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677618 kubelet[1713]: I0913 00:51:44.677442 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-k8s-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677618 kubelet[1713]: I0913 00:51:44.677488 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677618 kubelet[1713]: I0913 00:51:44.677554 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677618 kubelet[1713]: I0913 00:51:44.677601 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677783 kubelet[1713]: I0913 00:51:44.677635 1713 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b436a7d24d234cdc2c56aa28b25bc98-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-8fedea5c61\" (UID: \"8b436a7d24d234cdc2c56aa28b25bc98\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.677783 kubelet[1713]: I0913 00:51:44.677672 1713 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.726695 kubelet[1713]: I0913 00:51:44.726644 1713 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.727170 kubelet[1713]: E0913 00:51:44.727140 1713 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.227.187:6443/api/v1/nodes\": dial tcp 143.110.227.187:6443: connect: connection refused" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:44.836998 kubelet[1713]: E0913 00:51:44.836328 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:44.837282 kubelet[1713]: E0913 00:51:44.836383 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:44.838368 env[1309]: time="2025-09-13T00:51:44.838051632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8fedea5c61,Uid:8b436a7d24d234cdc2c56aa28b25bc98,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:44.838368 env[1309]: time="2025-09-13T00:51:44.838143993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8fedea5c61,Uid:e7325f5a390b2e1b81ce77628750d5a6,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:44.838865 env[1309]: time="2025-09-13T00:51:44.838826112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8fedea5c61,Uid:29b9afa368f3f5c654c89f90939fa638,Namespace:kube-system,Attempt:0,}" Sep 13 
00:51:44.838937 kubelet[1713]: E0913 00:51:44.838407 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:44.978964 kubelet[1713]: E0913 00:51:44.978887 1713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.227.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8fedea5c61?timeout=10s\": dial tcp 143.110.227.187:6443: connect: connection refused" interval="800ms" Sep 13 00:51:45.129216 kubelet[1713]: I0913 00:51:45.128834 1713 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:45.129610 kubelet[1713]: E0913 00:51:45.129322 1713 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.227.187:6443/api/v1/nodes\": dial tcp 143.110.227.187:6443: connect: connection refused" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:45.223040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152270328.mount: Deactivated successfully. 
Sep 13 00:51:45.226334 env[1309]: time="2025-09-13T00:51:45.226289787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.228363 env[1309]: time="2025-09-13T00:51:45.228321925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.229411 env[1309]: time="2025-09-13T00:51:45.229367513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.230021 env[1309]: time="2025-09-13T00:51:45.229988723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.230580 env[1309]: time="2025-09-13T00:51:45.230553918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.231259 env[1309]: time="2025-09-13T00:51:45.231231661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.233835 env[1309]: time="2025-09-13T00:51:45.233799617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.237035 env[1309]: time="2025-09-13T00:51:45.236997271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.238106 env[1309]: time="2025-09-13T00:51:45.238063813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.238722 env[1309]: time="2025-09-13T00:51:45.238691022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.239355 env[1309]: time="2025-09-13T00:51:45.239314148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.251788 env[1309]: time="2025-09-13T00:51:45.251724006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.268992 env[1309]: time="2025-09-13T00:51:45.262503292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:45.268992 env[1309]: time="2025-09-13T00:51:45.262545355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:45.268992 env[1309]: time="2025-09-13T00:51:45.262556269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:45.268992 env[1309]: time="2025-09-13T00:51:45.264061721Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b7ca9b74e607dcbf808117f2a292b2a64dd78f1cdfe9c0ab9ebfec7f4877a85 pid=1753 runtime=io.containerd.runc.v2 Sep 13 00:51:45.282146 env[1309]: time="2025-09-13T00:51:45.282050948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:45.283707 env[1309]: time="2025-09-13T00:51:45.283656558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:45.283882 env[1309]: time="2025-09-13T00:51:45.283857623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:45.284269 env[1309]: time="2025-09-13T00:51:45.284226999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b87ec897cfa18758d5cd1b490fe992e6b2b945b5e2cc07245e986b4c0d4329a9 pid=1772 runtime=io.containerd.runc.v2 Sep 13 00:51:45.292825 env[1309]: time="2025-09-13T00:51:45.292739376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:45.293101 env[1309]: time="2025-09-13T00:51:45.293063301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:45.293249 env[1309]: time="2025-09-13T00:51:45.293223357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:45.293730 env[1309]: time="2025-09-13T00:51:45.293688367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f481603b7b5ab64280aa04dc04ef0289f4f28992786504ac31b5c4504d48933 pid=1797 runtime=io.containerd.runc.v2 Sep 13 00:51:45.374698 env[1309]: time="2025-09-13T00:51:45.372749371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8fedea5c61,Uid:e7325f5a390b2e1b81ce77628750d5a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b7ca9b74e607dcbf808117f2a292b2a64dd78f1cdfe9c0ab9ebfec7f4877a85\"" Sep 13 00:51:45.374856 kubelet[1713]: E0913 00:51:45.374126 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:45.382302 env[1309]: time="2025-09-13T00:51:45.380736214Z" level=info msg="CreateContainer within sandbox \"6b7ca9b74e607dcbf808117f2a292b2a64dd78f1cdfe9c0ab9ebfec7f4877a85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:51:45.413979 kubelet[1713]: W0913 00:51:45.413877 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.227.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8fedea5c61&limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused Sep 13 00:51:45.414183 kubelet[1713]: E0913 00:51:45.414013 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.110.227.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8fedea5c61&limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:45.414455 env[1309]: 
time="2025-09-13T00:51:45.414403248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8fedea5c61,Uid:29b9afa368f3f5c654c89f90939fa638,Namespace:kube-system,Attempt:0,} returns sandbox id \"b87ec897cfa18758d5cd1b490fe992e6b2b945b5e2cc07245e986b4c0d4329a9\"" Sep 13 00:51:45.415493 kubelet[1713]: E0913 00:51:45.415459 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:45.428354 env[1309]: time="2025-09-13T00:51:45.428295602Z" level=info msg="CreateContainer within sandbox \"b87ec897cfa18758d5cd1b490fe992e6b2b945b5e2cc07245e986b4c0d4329a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:51:45.439464 env[1309]: time="2025-09-13T00:51:45.439404397Z" level=info msg="CreateContainer within sandbox \"6b7ca9b74e607dcbf808117f2a292b2a64dd78f1cdfe9c0ab9ebfec7f4877a85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be14230704ce558afa5ebe5f3e5bb9b9bb76cf2fe91be38663c4acf5849a7c74\"" Sep 13 00:51:45.445354 env[1309]: time="2025-09-13T00:51:45.445295309Z" level=info msg="StartContainer for \"be14230704ce558afa5ebe5f3e5bb9b9bb76cf2fe91be38663c4acf5849a7c74\"" Sep 13 00:51:45.452758 env[1309]: time="2025-09-13T00:51:45.452688715Z" level=info msg="CreateContainer within sandbox \"b87ec897cfa18758d5cd1b490fe992e6b2b945b5e2cc07245e986b4c0d4329a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"497b2f93881e08811068889accdba6096b3da5658bae19ecdb2b9cc3ebf16655\"" Sep 13 00:51:45.453718 env[1309]: time="2025-09-13T00:51:45.453673525Z" level=info msg="StartContainer for \"497b2f93881e08811068889accdba6096b3da5658bae19ecdb2b9cc3ebf16655\"" Sep 13 00:51:45.460506 env[1309]: time="2025-09-13T00:51:45.460452822Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8fedea5c61,Uid:8b436a7d24d234cdc2c56aa28b25bc98,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f481603b7b5ab64280aa04dc04ef0289f4f28992786504ac31b5c4504d48933\"" Sep 13 00:51:45.461839 kubelet[1713]: E0913 00:51:45.461793 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:45.466370 env[1309]: time="2025-09-13T00:51:45.466330532Z" level=info msg="CreateContainer within sandbox \"4f481603b7b5ab64280aa04dc04ef0289f4f28992786504ac31b5c4504d48933\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:51:45.475008 env[1309]: time="2025-09-13T00:51:45.474873558Z" level=info msg="CreateContainer within sandbox \"4f481603b7b5ab64280aa04dc04ef0289f4f28992786504ac31b5c4504d48933\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a305aa80637d5abdf54632920f42ef139f6c46ac364b3e66599754cbd81e3c9c\"" Sep 13 00:51:45.475995 env[1309]: time="2025-09-13T00:51:45.475902189Z" level=info msg="StartContainer for \"a305aa80637d5abdf54632920f42ef139f6c46ac364b3e66599754cbd81e3c9c\"" Sep 13 00:51:45.487663 kubelet[1713]: W0913 00:51:45.487587 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.227.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused Sep 13 00:51:45.487898 kubelet[1713]: E0913 00:51:45.487669 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.227.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:45.516426 kubelet[1713]: W0913 
00:51:45.516287 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.227.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused Sep 13 00:51:45.516426 kubelet[1713]: E0913 00:51:45.516386 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.227.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:45.591066 env[1309]: time="2025-09-13T00:51:45.591013971Z" level=info msg="StartContainer for \"497b2f93881e08811068889accdba6096b3da5658bae19ecdb2b9cc3ebf16655\" returns successfully" Sep 13 00:51:45.609359 env[1309]: time="2025-09-13T00:51:45.609292722Z" level=info msg="StartContainer for \"be14230704ce558afa5ebe5f3e5bb9b9bb76cf2fe91be38663c4acf5849a7c74\" returns successfully" Sep 13 00:51:45.630142 env[1309]: time="2025-09-13T00:51:45.630096643Z" level=info msg="StartContainer for \"a305aa80637d5abdf54632920f42ef139f6c46ac364b3e66599754cbd81e3c9c\" returns successfully" Sep 13 00:51:45.696255 kubelet[1713]: W0913 00:51:45.696031 1713 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.227.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.227.187:6443: connect: connection refused Sep 13 00:51:45.696255 kubelet[1713]: E0913 00:51:45.696195 1713 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.227.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.227.187:6443: connect: connection refused" 
logger="UnhandledError" Sep 13 00:51:45.780356 kubelet[1713]: E0913 00:51:45.780293 1713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.227.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8fedea5c61?timeout=10s\": dial tcp 143.110.227.187:6443: connect: connection refused" interval="1.6s" Sep 13 00:51:45.930691 kubelet[1713]: I0913 00:51:45.930643 1713 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:45.931157 kubelet[1713]: E0913 00:51:45.931125 1713 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.110.227.187:6443/api/v1/nodes\": dial tcp 143.110.227.187:6443: connect: connection refused" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:46.452822 kubelet[1713]: E0913 00:51:46.452787 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:46.455562 kubelet[1713]: E0913 00:51:46.455529 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:46.456602 kubelet[1713]: E0913 00:51:46.456573 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:47.459098 kubelet[1713]: E0913 00:51:47.459066 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:47.460101 kubelet[1713]: E0913 00:51:47.460061 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:47.533297 kubelet[1713]: I0913 00:51:47.533266 1713 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:47.696492 kubelet[1713]: E0913 00:51:47.696450 1713 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-8fedea5c61\" not found" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:47.858131 kubelet[1713]: I0913 00:51:47.858083 1713 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:48.254592 kubelet[1713]: E0913 00:51:48.254546 1713 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-8fedea5c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:48.254880 kubelet[1713]: E0913 00:51:48.254858 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:48.358849 kubelet[1713]: I0913 00:51:48.358798 1713 apiserver.go:52] "Watching apiserver" Sep 13 00:51:48.377103 kubelet[1713]: I0913 00:51:48.377060 1713 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:51:48.462432 kubelet[1713]: E0913 00:51:48.462349 1713 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:48.462999 kubelet[1713]: E0913 00:51:48.462603 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:49.527251 
kubelet[1713]: W0913 00:51:49.527209 1713 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:51:49.527704 kubelet[1713]: E0913 00:51:49.527454 1713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:49.923133 systemd[1]: Reloading. Sep 13 00:51:50.036760 /usr/lib/systemd/system-generators/torcx-generator[2005]: time="2025-09-13T00:51:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:51:50.036806 /usr/lib/systemd/system-generators/torcx-generator[2005]: time="2025-09-13T00:51:50Z" level=info msg="torcx already run" Sep 13 00:51:50.164558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:51:50.164782 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:51:50.197816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:51:50.334449 systemd[1]: Stopping kubelet.service... Sep 13 00:51:50.358656 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:51:50.359010 systemd[1]: Stopped kubelet.service. Sep 13 00:51:50.361413 systemd[1]: Starting kubelet.service... Sep 13 00:51:51.392603 systemd[1]: Started kubelet.service. 
Sep 13 00:51:51.498950 kubelet[2066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:51:51.498950 kubelet[2066]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:51:51.498950 kubelet[2066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:51:51.498950 kubelet[2066]: I0913 00:51:51.497947 2066 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:51:51.507593 kubelet[2066]: I0913 00:51:51.507549 2066 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:51:51.507593 kubelet[2066]: I0913 00:51:51.507582 2066 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:51:51.508439 kubelet[2066]: I0913 00:51:51.508401 2066 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:51:51.509829 kubelet[2066]: I0913 00:51:51.509778 2066 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:51:51.524182 kubelet[2066]: I0913 00:51:51.524139 2066 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:51:51.529697 sudo[2081]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:51:51.530473 sudo[2081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:51:51.532758 kubelet[2066]: E0913 00:51:51.532708 2066 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:51:51.532890 kubelet[2066]: I0913 00:51:51.532764 2066 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:51:51.536184 kubelet[2066]: I0913 00:51:51.536147 2066 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:51:51.536648 kubelet[2066]: I0913 00:51:51.536628 2066 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:51:51.536771 kubelet[2066]: I0913 00:51:51.536734 2066 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:51:51.537109 kubelet[2066]: I0913 00:51:51.536770 2066 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8fedea5c61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:51:51.537109 kubelet[2066]: I0913 00:51:51.536999 2066 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:51:51.537109 kubelet[2066]: I0913 00:51:51.537012 2066 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:51:51.537109 kubelet[2066]: I0913 00:51:51.537050 2066 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:51:51.537323 kubelet[2066]: I0913 00:51:51.537182 2066 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:51:51.537323 kubelet[2066]: I0913 00:51:51.537196 2066 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:51:51.537792 kubelet[2066]: I0913 00:51:51.537628 2066 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:51:51.537792 kubelet[2066]: I0913 00:51:51.537648 2066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:51:51.551278 kubelet[2066]: I0913 00:51:51.549325 2066 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:51:51.551278 kubelet[2066]: I0913 00:51:51.549868 2066 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:51:51.551278 kubelet[2066]: I0913 00:51:51.550399 2066 server.go:1274] "Started kubelet" Sep 13 00:51:51.561947 kubelet[2066]: I0913 00:51:51.561459 2066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:51:51.577775 kubelet[2066]: I0913 00:51:51.576324 2066 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:51:51.586452 kubelet[2066]: I0913 00:51:51.586134 2066 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:51:51.587590 kubelet[2066]: I0913 00:51:51.587312 2066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:51:51.587590 kubelet[2066]: I0913 00:51:51.587522 2066 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:51:51.587786 kubelet[2066]: I0913 00:51:51.587768 2066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:51:51.594788 kubelet[2066]: I0913 00:51:51.594740 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:51:51.598966 kubelet[2066]: I0913 00:51:51.595836 2066 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:51:51.598966 kubelet[2066]: E0913 00:51:51.596204 2066 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8fedea5c61\" not found" Sep 13 00:51:51.598966 kubelet[2066]: I0913 00:51:51.596663 2066 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:51:51.598966 kubelet[2066]: I0913 00:51:51.596782 2066 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:51:51.607494 kubelet[2066]: I0913 00:51:51.601776 2066 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:51:51.607494 kubelet[2066]: I0913 00:51:51.602557 2066 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:51:51.612817 kubelet[2066]: I0913 00:51:51.612776 2066 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:51:51.634038 kubelet[2066]: I0913 00:51:51.633892 2066 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:51:51.636757 kubelet[2066]: I0913 00:51:51.636722 2066 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:51:51.636970 kubelet[2066]: I0913 00:51:51.636776 2066 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:51:51.636970 kubelet[2066]: E0913 00:51:51.636833 2066 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:51:51.717688 kubelet[2066]: I0913 00:51:51.717585 2066 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:51:51.717688 kubelet[2066]: I0913 00:51:51.717615 2066 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:51:51.717688 kubelet[2066]: I0913 00:51:51.717637 2066 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:51:51.719274 kubelet[2066]: I0913 00:51:51.719247 2066 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:51:51.719369 kubelet[2066]: I0913 00:51:51.719270 2066 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:51:51.719369 kubelet[2066]: I0913 00:51:51.719291 2066 policy_none.go:49] "None policy: Start" Sep 13 00:51:51.720204 kubelet[2066]: I0913 00:51:51.720182 2066 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:51:51.720204 kubelet[2066]: I0913 00:51:51.720207 2066 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:51:51.720367 kubelet[2066]: I0913 00:51:51.720355 2066 state_mem.go:75] "Updated machine memory state" Sep 13 00:51:51.721629 kubelet[2066]: I0913 00:51:51.721602 2066 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:51:51.721810 kubelet[2066]: I0913 00:51:51.721788 2066 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:51:51.721951 kubelet[2066]: I0913 00:51:51.721807 2066 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:51:51.725028 kubelet[2066]: I0913 00:51:51.723575 2066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:51:51.762554 kubelet[2066]: W0913 00:51:51.762518 2066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:51:51.763265 kubelet[2066]: W0913 00:51:51.763241 2066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:51:51.763593 kubelet[2066]: W0913 00:51:51.763420 2066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:51:51.763701 kubelet[2066]: E0913 00:51:51.763605 2066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.804792 kubelet[2066]: I0913 00:51:51.804750 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805040 kubelet[2066]: I0913 00:51:51.805020 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805145 
kubelet[2066]: I0913 00:51:51.805129 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805221 kubelet[2066]: I0913 00:51:51.805207 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805316 kubelet[2066]: I0913 00:51:51.805301 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805405 kubelet[2066]: I0913 00:51:51.805388 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805482 kubelet[2066]: I0913 00:51:51.805469 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7325f5a390b2e1b81ce77628750d5a6-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.8-n-8fedea5c61\" (UID: \"e7325f5a390b2e1b81ce77628750d5a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.805835 kubelet[2066]: I0913 00:51:51.805754 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b436a7d24d234cdc2c56aa28b25bc98-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-8fedea5c61\" (UID: \"8b436a7d24d234cdc2c56aa28b25bc98\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.806008 kubelet[2066]: I0913 00:51:51.805982 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29b9afa368f3f5c654c89f90939fa638-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" (UID: \"29b9afa368f3f5c654c89f90939fa638\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.832015 kubelet[2066]: I0913 00:51:51.831979 2066 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.847372 kubelet[2066]: I0913 00:51:51.847329 2066 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:51.847719 kubelet[2066]: I0913 00:51:51.847700 2066 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:52.063886 kubelet[2066]: E0913 00:51:52.063850 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:52.064431 kubelet[2066]: E0913 00:51:52.064296 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 
00:51:52.064583 kubelet[2066]: E0913 00:51:52.064347 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:52.331861 sudo[2081]: pam_unix(sudo:session): session closed for user root Sep 13 00:51:52.543447 kubelet[2066]: I0913 00:51:52.543405 2066 apiserver.go:52] "Watching apiserver" Sep 13 00:51:52.597234 kubelet[2066]: I0913 00:51:52.597089 2066 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:51:52.667207 kubelet[2066]: E0913 00:51:52.667177 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:52.668073 kubelet[2066]: E0913 00:51:52.667831 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:52.687199 kubelet[2066]: W0913 00:51:52.687158 2066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:51:52.687505 kubelet[2066]: E0913 00:51:52.687480 2066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-8fedea5c61\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" Sep 13 00:51:52.687790 kubelet[2066]: E0913 00:51:52.687773 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:52.762464 kubelet[2066]: I0913 00:51:52.762394 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8fedea5c61" 
podStartSLOduration=1.762361647 podStartE2EDuration="1.762361647s" podCreationTimestamp="2025-09-13 00:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:51:52.74631477 +0000 UTC m=+1.329167404" watchObservedRunningTime="2025-09-13 00:51:52.762361647 +0000 UTC m=+1.345214275" Sep 13 00:51:52.799006 kubelet[2066]: I0913 00:51:52.798934 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8fedea5c61" podStartSLOduration=3.7988759869999997 podStartE2EDuration="3.798875987s" podCreationTimestamp="2025-09-13 00:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:51:52.762795517 +0000 UTC m=+1.345648132" watchObservedRunningTime="2025-09-13 00:51:52.798875987 +0000 UTC m=+1.381728616" Sep 13 00:51:53.669213 kubelet[2066]: E0913 00:51:53.669170 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:53.670851 kubelet[2066]: E0913 00:51:53.670808 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:53.802739 kubelet[2066]: E0913 00:51:53.802694 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:54.093198 sudo[1436]: pam_unix(sudo:session): session closed for user root Sep 13 00:51:54.097530 sshd[1430]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:54.102523 systemd[1]: sshd@4-143.110.227.187:22-147.75.109.163:43026.service: Deactivated 
successfully. Sep 13 00:51:54.104659 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:51:54.105023 systemd-logind[1297]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:51:54.106276 systemd-logind[1297]: Removed session 5. Sep 13 00:51:55.248420 kubelet[2066]: I0913 00:51:55.248344 2066 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:51:55.249019 env[1309]: time="2025-09-13T00:51:55.248846072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:51:55.249527 kubelet[2066]: I0913 00:51:55.249499 2066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:51:56.134150 kubelet[2066]: I0913 00:51:56.134087 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8fedea5c61" podStartSLOduration=5.134060186 podStartE2EDuration="5.134060186s" podCreationTimestamp="2025-09-13 00:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:51:52.79962644 +0000 UTC m=+1.382479078" watchObservedRunningTime="2025-09-13 00:51:56.134060186 +0000 UTC m=+4.716912822" Sep 13 00:51:56.147518 kubelet[2066]: W0913 00:51:56.147470 2066 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.8-n-8fedea5c61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-8fedea5c61' and this object Sep 13 00:51:56.147518 kubelet[2066]: E0913 00:51:56.147519 2066 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User 
\"system:node:ci-3510.3.8-n-8fedea5c61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-8fedea5c61' and this object" logger="UnhandledError" Sep 13 00:51:56.241860 kubelet[2066]: I0913 00:51:56.241734 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33ca17c3-8f4f-493d-b950-93d2b307d69f-kube-proxy\") pod \"kube-proxy-ms4sd\" (UID: \"33ca17c3-8f4f-493d-b950-93d2b307d69f\") " pod="kube-system/kube-proxy-ms4sd" Sep 13 00:51:56.241860 kubelet[2066]: I0913 00:51:56.241870 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hostproc\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242113 kubelet[2066]: I0913 00:51:56.241898 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hubble-tls\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242113 kubelet[2066]: I0913 00:51:56.241972 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-lib-modules\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242113 kubelet[2066]: I0913 00:51:56.241991 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-kernel\") pod 
\"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242113 kubelet[2066]: I0913 00:51:56.242007 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ca17c3-8f4f-493d-b950-93d2b307d69f-xtables-lock\") pod \"kube-proxy-ms4sd\" (UID: \"33ca17c3-8f4f-493d-b950-93d2b307d69f\") " pod="kube-system/kube-proxy-ms4sd" Sep 13 00:51:56.242113 kubelet[2066]: I0913 00:51:56.242044 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfds9\" (UniqueName: \"kubernetes.io/projected/33ca17c3-8f4f-493d-b950-93d2b307d69f-kube-api-access-hfds9\") pod \"kube-proxy-ms4sd\" (UID: \"33ca17c3-8f4f-493d-b950-93d2b307d69f\") " pod="kube-system/kube-proxy-ms4sd" Sep 13 00:51:56.242259 kubelet[2066]: I0913 00:51:56.242065 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-run\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242259 kubelet[2066]: I0913 00:51:56.242124 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-clustermesh-secrets\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242259 kubelet[2066]: I0913 00:51:56.242148 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-config-path\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " 
pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242259 kubelet[2066]: I0913 00:51:56.242208 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cni-path\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242259 kubelet[2066]: I0913 00:51:56.242234 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwncc\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-kube-api-access-pwncc\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242275 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ca17c3-8f4f-493d-b950-93d2b307d69f-lib-modules\") pod \"kube-proxy-ms4sd\" (UID: \"33ca17c3-8f4f-493d-b950-93d2b307d69f\") " pod="kube-system/kube-proxy-ms4sd" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242298 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-bpf-maps\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242348 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-etc-cni-netd\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242370 2066 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-net\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242386 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-cgroup\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.242466 kubelet[2066]: I0913 00:51:56.242448 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-xtables-lock\") pod \"cilium-8h9cj\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " pod="kube-system/cilium-8h9cj" Sep 13 00:51:56.343792 kubelet[2066]: I0913 00:51:56.343752 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-cilium-config-path\") pod \"cilium-operator-5d85765b45-7grbj\" (UID: \"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\") " pod="kube-system/cilium-operator-5d85765b45-7grbj" Sep 13 00:51:56.344504 kubelet[2066]: I0913 00:51:56.344480 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vjm\" (UniqueName: \"kubernetes.io/projected/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-kube-api-access-v5vjm\") pod \"cilium-operator-5d85765b45-7grbj\" (UID: \"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\") " pod="kube-system/cilium-operator-5d85765b45-7grbj" Sep 13 00:51:56.345052 kubelet[2066]: I0913 00:51:56.345024 2066 swap_util.go:74] "error creating dir to test if 
tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:51:56.439026 kubelet[2066]: E0913 00:51:56.438869 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:56.440261 env[1309]: time="2025-09-13T00:51:56.439641452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h9cj,Uid:7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:56.462945 env[1309]: time="2025-09-13T00:51:56.462290569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:56.462945 env[1309]: time="2025-09-13T00:51:56.462393258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:56.462945 env[1309]: time="2025-09-13T00:51:56.462418909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:56.462945 env[1309]: time="2025-09-13T00:51:56.462605159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4 pid=2145 runtime=io.containerd.runc.v2 Sep 13 00:51:56.526088 env[1309]: time="2025-09-13T00:51:56.526034632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8h9cj,Uid:7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\"" Sep 13 00:51:56.527649 kubelet[2066]: E0913 00:51:56.527212 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:56.531429 env[1309]: time="2025-09-13T00:51:56.530793331Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:51:56.625184 kubelet[2066]: E0913 00:51:56.625138 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:56.627754 env[1309]: time="2025-09-13T00:51:56.627677412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7grbj,Uid:22b8e83a-501b-47e5-a4ee-f0a4529e69fd,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:56.643688 env[1309]: time="2025-09-13T00:51:56.643534287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:56.643937 env[1309]: time="2025-09-13T00:51:56.643704963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:56.643937 env[1309]: time="2025-09-13T00:51:56.643776020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:56.644265 env[1309]: time="2025-09-13T00:51:56.644112212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31 pid=2188 runtime=io.containerd.runc.v2 Sep 13 00:51:56.722176 env[1309]: time="2025-09-13T00:51:56.720074604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7grbj,Uid:22b8e83a-501b-47e5-a4ee-f0a4529e69fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\"" Sep 13 00:51:56.724946 kubelet[2066]: E0913 00:51:56.723613 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:57.342983 kubelet[2066]: E0913 00:51:57.342945 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:57.344970 env[1309]: time="2025-09-13T00:51:57.344535114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ms4sd,Uid:33ca17c3-8f4f-493d-b950-93d2b307d69f,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:57.372793 env[1309]: time="2025-09-13T00:51:57.372620614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:57.372793 env[1309]: time="2025-09-13T00:51:57.372788289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:57.373022 env[1309]: time="2025-09-13T00:51:57.372815574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:57.373206 env[1309]: time="2025-09-13T00:51:57.373152667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4d5d7a97d2b51622602a0607e31d70fd0f54bc55fe58d4ee21d7cc336e4582f pid=2229 runtime=io.containerd.runc.v2 Sep 13 00:51:57.395180 systemd[1]: run-containerd-runc-k8s.io-d4d5d7a97d2b51622602a0607e31d70fd0f54bc55fe58d4ee21d7cc336e4582f-runc.sbeVpv.mount: Deactivated successfully. Sep 13 00:51:57.433021 env[1309]: time="2025-09-13T00:51:57.432880987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ms4sd,Uid:33ca17c3-8f4f-493d-b950-93d2b307d69f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4d5d7a97d2b51622602a0607e31d70fd0f54bc55fe58d4ee21d7cc336e4582f\"" Sep 13 00:51:57.433854 kubelet[2066]: E0913 00:51:57.433825 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:57.438307 env[1309]: time="2025-09-13T00:51:57.438253713Z" level=info msg="CreateContainer within sandbox \"d4d5d7a97d2b51622602a0607e31d70fd0f54bc55fe58d4ee21d7cc336e4582f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:51:57.451402 env[1309]: time="2025-09-13T00:51:57.451348721Z" level=info msg="CreateContainer within sandbox \"d4d5d7a97d2b51622602a0607e31d70fd0f54bc55fe58d4ee21d7cc336e4582f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d35f10a4c6a6fc6f201cb18d4fa7e0bca9bb3323564f52098873a75780c88048\""
Sep 13 00:51:57.454136 env[1309]: time="2025-09-13T00:51:57.452846975Z" level=info msg="StartContainer for \"d35f10a4c6a6fc6f201cb18d4fa7e0bca9bb3323564f52098873a75780c88048\"" Sep 13 00:51:57.520172 env[1309]: time="2025-09-13T00:51:57.520124684Z" level=info msg="StartContainer for \"d35f10a4c6a6fc6f201cb18d4fa7e0bca9bb3323564f52098873a75780c88048\" returns successfully" Sep 13 00:51:57.685687 kubelet[2066]: E0913 00:51:57.685279 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:59.040618 kubelet[2066]: E0913 00:51:59.040086 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:51:59.054340 kubelet[2066]: I0913 00:51:59.054224 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ms4sd" podStartSLOduration=3.054205547 podStartE2EDuration="3.054205547s" podCreationTimestamp="2025-09-13 00:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:51:57.697817877 +0000 UTC m=+6.280670534" watchObservedRunningTime="2025-09-13 00:51:59.054205547 +0000 UTC m=+7.637058183" Sep 13 00:51:59.694548 kubelet[2066]: E0913 00:51:59.694274 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:01.569419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294582592.mount: Deactivated successfully.
Sep 13 00:52:02.798466 kubelet[2066]: E0913 00:52:02.798418 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:03.823971 kubelet[2066]: E0913 00:52:03.823450 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:05.069197 env[1309]: time="2025-09-13T00:52:05.069141106Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:05.071080 env[1309]: time="2025-09-13T00:52:05.071016591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:05.072582 env[1309]: time="2025-09-13T00:52:05.072541811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:05.073244 env[1309]: time="2025-09-13T00:52:05.073211834Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:52:05.075308 env[1309]: time="2025-09-13T00:52:05.075257760Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:52:05.076906 env[1309]: time="2025-09-13T00:52:05.076857045Z" level=info 
msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:52:05.099684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252433576.mount: Deactivated successfully. Sep 13 00:52:05.114813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70048336.mount: Deactivated successfully. Sep 13 00:52:05.118777 env[1309]: time="2025-09-13T00:52:05.118721688Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\"" Sep 13 00:52:05.121015 env[1309]: time="2025-09-13T00:52:05.120567044Z" level=info msg="StartContainer for \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\"" Sep 13 00:52:05.193986 env[1309]: time="2025-09-13T00:52:05.190237621Z" level=info msg="StartContainer for \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\" returns successfully" Sep 13 00:52:05.256975 env[1309]: time="2025-09-13T00:52:05.256922424Z" level=info msg="shim disconnected" id=21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e Sep 13 00:52:05.257263 env[1309]: time="2025-09-13T00:52:05.257240694Z" level=warning msg="cleaning up after shim disconnected" id=21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e namespace=k8s.io Sep 13 00:52:05.257346 env[1309]: time="2025-09-13T00:52:05.257331198Z" level=info msg="cleaning up dead shim" Sep 13 00:52:05.268139 env[1309]: time="2025-09-13T00:52:05.268082349Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2476 runtime=io.containerd.runc.v2\n" Sep 13 00:52:05.709611 kubelet[2066]: E0913 00:52:05.709541 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:05.732120 env[1309]: time="2025-09-13T00:52:05.730526363Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:52:05.744557 env[1309]: time="2025-09-13T00:52:05.744492533Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\"" Sep 13 00:52:05.747097 env[1309]: time="2025-09-13T00:52:05.745474545Z" level=info msg="StartContainer for \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\"" Sep 13 00:52:05.804096 env[1309]: time="2025-09-13T00:52:05.804044837Z" level=info msg="StartContainer for \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\" returns successfully" Sep 13 00:52:05.820212 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:52:05.820815 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:52:05.822193 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:52:05.830021 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:52:05.846850 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:52:05.854099 env[1309]: time="2025-09-13T00:52:05.854029148Z" level=info msg="shim disconnected" id=47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790 Sep 13 00:52:05.854099 env[1309]: time="2025-09-13T00:52:05.854082871Z" level=warning msg="cleaning up after shim disconnected" id=47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790 namespace=k8s.io Sep 13 00:52:05.854099 env[1309]: time="2025-09-13T00:52:05.854094553Z" level=info msg="cleaning up dead shim" Sep 13 00:52:05.865275 env[1309]: time="2025-09-13T00:52:05.865228419Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2541 runtime=io.containerd.runc.v2\n" Sep 13 00:52:06.096984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e-rootfs.mount: Deactivated successfully. Sep 13 00:52:06.439436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636051038.mount: Deactivated successfully. Sep 13 00:52:06.714323 kubelet[2066]: E0913 00:52:06.712996 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:06.720018 env[1309]: time="2025-09-13T00:52:06.719612107Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:52:06.737985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921329521.mount: Deactivated successfully. 
Sep 13 00:52:06.752318 env[1309]: time="2025-09-13T00:52:06.752266934Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\"" Sep 13 00:52:06.754718 env[1309]: time="2025-09-13T00:52:06.754677548Z" level=info msg="StartContainer for \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\"" Sep 13 00:52:06.836995 env[1309]: time="2025-09-13T00:52:06.836025183Z" level=info msg="StartContainer for \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\" returns successfully" Sep 13 00:52:06.865667 env[1309]: time="2025-09-13T00:52:06.865607487Z" level=info msg="shim disconnected" id=7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c Sep 13 00:52:06.865667 env[1309]: time="2025-09-13T00:52:06.865657713Z" level=warning msg="cleaning up after shim disconnected" id=7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c namespace=k8s.io Sep 13 00:52:06.865667 env[1309]: time="2025-09-13T00:52:06.865666563Z" level=info msg="cleaning up dead shim" Sep 13 00:52:06.875805 env[1309]: time="2025-09-13T00:52:06.875722004Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2601 runtime=io.containerd.runc.v2\n" Sep 13 00:52:07.226069 env[1309]: time="2025-09-13T00:52:07.226017537Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:07.227523 env[1309]: time="2025-09-13T00:52:07.227475092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:52:07.229127 env[1309]: time="2025-09-13T00:52:07.229087983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:07.229828 env[1309]: time="2025-09-13T00:52:07.229790384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:52:07.234531 env[1309]: time="2025-09-13T00:52:07.234469561Z" level=info msg="CreateContainer within sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:52:07.244982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736379067.mount: Deactivated successfully. Sep 13 00:52:07.254019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359685810.mount: Deactivated successfully. 
Sep 13 00:52:07.258546 env[1309]: time="2025-09-13T00:52:07.258488814Z" level=info msg="CreateContainer within sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\"" Sep 13 00:52:07.260884 env[1309]: time="2025-09-13T00:52:07.260846391Z" level=info msg="StartContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\"" Sep 13 00:52:07.339953 env[1309]: time="2025-09-13T00:52:07.337290705Z" level=info msg="StartContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" returns successfully" Sep 13 00:52:07.722205 kubelet[2066]: E0913 00:52:07.720639 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:07.737500 env[1309]: time="2025-09-13T00:52:07.737441144Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:52:07.742716 kubelet[2066]: E0913 00:52:07.742655 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:07.770323 env[1309]: time="2025-09-13T00:52:07.770224421Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\"" Sep 13 00:52:07.771005 env[1309]: time="2025-09-13T00:52:07.770966381Z" level=info msg="StartContainer for \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\""
Sep 13 00:52:07.907829 env[1309]: time="2025-09-13T00:52:07.907771054Z" level=info msg="StartContainer for \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\" returns successfully" Sep 13 00:52:07.957243 env[1309]: time="2025-09-13T00:52:07.957193157Z" level=info msg="shim disconnected" id=27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd Sep 13 00:52:07.957617 env[1309]: time="2025-09-13T00:52:07.957594225Z" level=warning msg="cleaning up after shim disconnected" id=27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd namespace=k8s.io Sep 13 00:52:07.957718 env[1309]: time="2025-09-13T00:52:07.957702168Z" level=info msg="cleaning up dead shim" Sep 13 00:52:07.979103 env[1309]: time="2025-09-13T00:52:07.978983838Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2693 runtime=io.containerd.runc.v2\n" Sep 13 00:52:08.564466 update_engine[1298]: I0913 00:52:08.564386 1298 update_attempter.cc:509] Updating boot flags... Sep 13 00:52:08.746513 kubelet[2066]: E0913 00:52:08.746477 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:08.752055 kubelet[2066]: E0913 00:52:08.747688 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:08.752146 env[1309]: time="2025-09-13T00:52:08.751252550Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:52:08.765306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697649923.mount: Deactivated successfully.
Sep 13 00:52:08.775546 env[1309]: time="2025-09-13T00:52:08.774731648Z" level=info msg="CreateContainer within sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\"" Sep 13 00:52:08.776928 env[1309]: time="2025-09-13T00:52:08.776000183Z" level=info msg="StartContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\"" Sep 13 00:52:08.782830 kubelet[2066]: I0913 00:52:08.782750 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7grbj" podStartSLOduration=2.276291074 podStartE2EDuration="12.782728931s" podCreationTimestamp="2025-09-13 00:51:56 +0000 UTC" firstStartedPulling="2025-09-13 00:51:56.72464882 +0000 UTC m=+5.307501437" lastFinishedPulling="2025-09-13 00:52:07.231086665 +0000 UTC m=+15.813939294" observedRunningTime="2025-09-13 00:52:07.805342323 +0000 UTC m=+16.388194960" watchObservedRunningTime="2025-09-13 00:52:08.782728931 +0000 UTC m=+17.365581559" Sep 13 00:52:08.861644 env[1309]: time="2025-09-13T00:52:08.861509638Z" level=info msg="StartContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" returns successfully" Sep 13 00:52:09.052111 kubelet[2066]: I0913 00:52:09.052071 2066 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:52:09.145966 kubelet[2066]: I0913 00:52:09.145739 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af88115-139b-4e35-83a8-ae3bda63feaa-config-volume\") pod \"coredns-7c65d6cfc9-nvcwr\" (UID: \"6af88115-139b-4e35-83a8-ae3bda63feaa\") " pod="kube-system/coredns-7c65d6cfc9-nvcwr" Sep 13 00:52:09.146296 kubelet[2066]: I0913 00:52:09.146268 2066 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbd8\" (UniqueName: \"kubernetes.io/projected/6af88115-139b-4e35-83a8-ae3bda63feaa-kube-api-access-7hbd8\") pod \"coredns-7c65d6cfc9-nvcwr\" (UID: \"6af88115-139b-4e35-83a8-ae3bda63feaa\") " pod="kube-system/coredns-7c65d6cfc9-nvcwr" Sep 13 00:52:09.146597 kubelet[2066]: I0913 00:52:09.146578 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxwd\" (UniqueName: \"kubernetes.io/projected/0a2c40ce-b4b9-4693-bcd1-8279badae4c7-kube-api-access-qtxwd\") pod \"coredns-7c65d6cfc9-xmmp6\" (UID: \"0a2c40ce-b4b9-4693-bcd1-8279badae4c7\") " pod="kube-system/coredns-7c65d6cfc9-xmmp6" Sep 13 00:52:09.146780 kubelet[2066]: I0913 00:52:09.146764 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c40ce-b4b9-4693-bcd1-8279badae4c7-config-volume\") pod \"coredns-7c65d6cfc9-xmmp6\" (UID: \"0a2c40ce-b4b9-4693-bcd1-8279badae4c7\") " pod="kube-system/coredns-7c65d6cfc9-xmmp6" Sep 13 00:52:09.382000 kubelet[2066]: E0913 00:52:09.381965 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:09.383111 env[1309]: time="2025-09-13T00:52:09.383055041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvcwr,Uid:6af88115-139b-4e35-83a8-ae3bda63feaa,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:09.393861 kubelet[2066]: E0913 00:52:09.393826 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:09.394406 env[1309]: time="2025-09-13T00:52:09.394367062Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xmmp6,Uid:0a2c40ce-b4b9-4693-bcd1-8279badae4c7,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:09.751474 kubelet[2066]: E0913 00:52:09.751438 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:09.783455 kubelet[2066]: I0913 00:52:09.783365 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8h9cj" podStartSLOduration=5.238112979 podStartE2EDuration="13.783333042s" podCreationTimestamp="2025-09-13 00:51:56 +0000 UTC" firstStartedPulling="2025-09-13 00:51:56.529251912 +0000 UTC m=+5.112104527" lastFinishedPulling="2025-09-13 00:52:05.074471974 +0000 UTC m=+13.657324590" observedRunningTime="2025-09-13 00:52:09.781063063 +0000 UTC m=+18.363915702" watchObservedRunningTime="2025-09-13 00:52:09.783333042 +0000 UTC m=+18.366185680" Sep 13 00:52:10.752875 kubelet[2066]: E0913 00:52:10.752824 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:11.331987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:52:11.333056 systemd-networkd[1062]: cilium_host: Link UP Sep 13 00:52:11.333363 systemd-networkd[1062]: cilium_net: Link UP Sep 13 00:52:11.333369 systemd-networkd[1062]: cilium_net: Gained carrier Sep 13 00:52:11.333613 systemd-networkd[1062]: cilium_host: Gained carrier Sep 13 00:52:11.333856 systemd-networkd[1062]: cilium_host: Gained IPv6LL Sep 13 00:52:11.468526 systemd-networkd[1062]: cilium_vxlan: Link UP Sep 13 00:52:11.468535 systemd-networkd[1062]: cilium_vxlan: Gained carrier Sep 13 00:52:11.755163 kubelet[2066]: E0913 00:52:11.755121 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:11.784160 systemd-networkd[1062]: cilium_net: Gained IPv6LL Sep 13 00:52:11.859973 kernel: NET: Registered PF_ALG protocol family Sep 13 00:52:12.722716 systemd-networkd[1062]: lxc_health: Link UP Sep 13 00:52:12.729999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:52:12.729868 systemd-networkd[1062]: lxc_health: Gained carrier Sep 13 00:52:12.978272 systemd-networkd[1062]: lxc98fd2b9dacd7: Link UP Sep 13 00:52:12.985986 kernel: eth0: renamed from tmp13a02 Sep 13 00:52:12.993818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc98fd2b9dacd7: link becomes ready Sep 13 00:52:12.994099 systemd-networkd[1062]: lxc98fd2b9dacd7: Gained carrier Sep 13 00:52:13.010578 systemd-networkd[1062]: lxc10a3b5919ad9: Link UP Sep 13 00:52:13.019947 kernel: eth0: renamed from tmpe82dc Sep 13 00:52:13.025067 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc10a3b5919ad9: link becomes ready Sep 13 00:52:13.024887 systemd-networkd[1062]: lxc10a3b5919ad9: Gained carrier Sep 13 00:52:13.392126 systemd-networkd[1062]: cilium_vxlan: Gained IPv6LL Sep 13 00:52:13.904197 systemd-networkd[1062]: lxc_health: Gained IPv6LL Sep 13 00:52:14.441250 kubelet[2066]: E0913 00:52:14.441195 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:14.544164 systemd-networkd[1062]: lxc10a3b5919ad9: Gained IPv6LL Sep 13 00:52:14.764124 kubelet[2066]: E0913 00:52:14.763970 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:15.056110 systemd-networkd[1062]: lxc98fd2b9dacd7: Gained IPv6LL Sep 13 00:52:15.765858 kubelet[2066]: E0913 00:52:15.765826 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:17.356975 env[1309]: time="2025-09-13T00:52:17.356870320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:17.357591 env[1309]: time="2025-09-13T00:52:17.357453042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:17.357591 env[1309]: time="2025-09-13T00:52:17.357470013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:17.358062 env[1309]: time="2025-09-13T00:52:17.357836439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13a02e6a259178d2cde2df30bc79652c2761b9f9f13f3c608d22ef29c56fff49 pid=3263 runtime=io.containerd.runc.v2 Sep 13 00:52:17.370002 env[1309]: time="2025-09-13T00:52:17.369413920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:17.370002 env[1309]: time="2025-09-13T00:52:17.369460370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:17.370002 env[1309]: time="2025-09-13T00:52:17.369470706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:17.372015 env[1309]: time="2025-09-13T00:52:17.370860821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e82dcd1ddf35ac684aa8529795bae2ca2df807b91417ff81bc27e69d83d26f4d pid=3277 runtime=io.containerd.runc.v2 Sep 13 00:52:17.413288 systemd[1]: run-containerd-runc-k8s.io-13a02e6a259178d2cde2df30bc79652c2761b9f9f13f3c608d22ef29c56fff49-runc.VgM2l4.mount: Deactivated successfully. Sep 13 00:52:17.491268 env[1309]: time="2025-09-13T00:52:17.491222282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvcwr,Uid:6af88115-139b-4e35-83a8-ae3bda63feaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"13a02e6a259178d2cde2df30bc79652c2761b9f9f13f3c608d22ef29c56fff49\"" Sep 13 00:52:17.492530 kubelet[2066]: E0913 00:52:17.492322 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:17.494900 env[1309]: time="2025-09-13T00:52:17.494860993Z" level=info msg="CreateContainer within sandbox \"13a02e6a259178d2cde2df30bc79652c2761b9f9f13f3c608d22ef29c56fff49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:52:17.519943 env[1309]: time="2025-09-13T00:52:17.518195507Z" level=info msg="CreateContainer within sandbox \"13a02e6a259178d2cde2df30bc79652c2761b9f9f13f3c608d22ef29c56fff49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86f89bdf2c7264191b4d6025b96100a28fe9edc1456d0c607597797c39b2aa2d\"" Sep 13 00:52:17.520740 env[1309]: time="2025-09-13T00:52:17.520686405Z" level=info msg="StartContainer for \"86f89bdf2c7264191b4d6025b96100a28fe9edc1456d0c607597797c39b2aa2d\"" Sep 13 00:52:17.547946 env[1309]: time="2025-09-13T00:52:17.540800732Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xmmp6,Uid:0a2c40ce-b4b9-4693-bcd1-8279badae4c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e82dcd1ddf35ac684aa8529795bae2ca2df807b91417ff81bc27e69d83d26f4d\"" Sep 13 00:52:17.548137 kubelet[2066]: E0913 00:52:17.541626 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:17.557095 env[1309]: time="2025-09-13T00:52:17.557054161Z" level=info msg="CreateContainer within sandbox \"e82dcd1ddf35ac684aa8529795bae2ca2df807b91417ff81bc27e69d83d26f4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:52:17.567600 env[1309]: time="2025-09-13T00:52:17.567549514Z" level=info msg="CreateContainer within sandbox \"e82dcd1ddf35ac684aa8529795bae2ca2df807b91417ff81bc27e69d83d26f4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"896d8d889be59460dd462c311f1e31a0f1a06e69158e976974c445b7e5fc559a\"" Sep 13 00:52:17.575888 env[1309]: time="2025-09-13T00:52:17.571246872Z" level=info msg="StartContainer for \"896d8d889be59460dd462c311f1e31a0f1a06e69158e976974c445b7e5fc559a\"" Sep 13 00:52:17.626002 env[1309]: time="2025-09-13T00:52:17.624826153Z" level=info msg="StartContainer for \"86f89bdf2c7264191b4d6025b96100a28fe9edc1456d0c607597797c39b2aa2d\" returns successfully" Sep 13 00:52:17.652175 env[1309]: time="2025-09-13T00:52:17.652130571Z" level=info msg="StartContainer for \"896d8d889be59460dd462c311f1e31a0f1a06e69158e976974c445b7e5fc559a\" returns successfully" Sep 13 00:52:17.774503 kubelet[2066]: E0913 00:52:17.774461 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:17.777552 kubelet[2066]: E0913 00:52:17.777455 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:17.798941 kubelet[2066]: I0913 00:52:17.798858 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nvcwr" podStartSLOduration=21.798812515 podStartE2EDuration="21.798812515s" podCreationTimestamp="2025-09-13 00:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:17.797048614 +0000 UTC m=+26.379901251" watchObservedRunningTime="2025-09-13 00:52:17.798812515 +0000 UTC m=+26.381665152" Sep 13 00:52:18.779685 kubelet[2066]: E0913 00:52:18.779633 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:18.780573 kubelet[2066]: E0913 00:52:18.780318 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:18.796219 kubelet[2066]: I0913 00:52:18.796159 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xmmp6" podStartSLOduration=22.796127695 podStartE2EDuration="22.796127695s" podCreationTimestamp="2025-09-13 00:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:17.833044253 +0000 UTC m=+26.415896891" watchObservedRunningTime="2025-09-13 00:52:18.796127695 +0000 UTC m=+27.378980332" Sep 13 00:52:19.781571 kubelet[2066]: E0913 00:52:19.781534 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:52:19.782641 kubelet[2066]: E0913 00:52:19.782531 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:52:30.665634 systemd[1]: Started sshd@5-143.110.227.187:22-147.75.109.163:36272.service. Sep 13 00:52:30.727381 sshd[3419]: Accepted publickey for core from 147.75.109.163 port 36272 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:30.730350 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:30.737987 systemd-logind[1297]: New session 6 of user core. Sep 13 00:52:30.739451 systemd[1]: Started session-6.scope. Sep 13 00:52:30.956606 sshd[3419]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:30.960602 systemd[1]: sshd@5-143.110.227.187:22-147.75.109.163:36272.service: Deactivated successfully. Sep 13 00:52:30.962117 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:52:30.962557 systemd-logind[1297]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:52:30.963578 systemd-logind[1297]: Removed session 6. Sep 13 00:52:35.964065 systemd[1]: Started sshd@6-143.110.227.187:22-147.75.109.163:36274.service. Sep 13 00:52:36.012136 sshd[3433]: Accepted publickey for core from 147.75.109.163 port 36274 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:36.014407 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:36.019935 systemd-logind[1297]: New session 7 of user core. Sep 13 00:52:36.021050 systemd[1]: Started session-7.scope. Sep 13 00:52:36.173009 sshd[3433]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:36.176236 systemd-logind[1297]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:52:36.176414 systemd[1]: sshd@6-143.110.227.187:22-147.75.109.163:36274.service: Deactivated successfully.
Sep 13 00:52:36.177305 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:52:36.178065 systemd-logind[1297]: Removed session 7. Sep 13 00:52:41.177558 systemd[1]: Started sshd@7-143.110.227.187:22-147.75.109.163:58616.service. Sep 13 00:52:41.227502 sshd[3446]: Accepted publickey for core from 147.75.109.163 port 58616 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:41.229181 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:41.235006 systemd[1]: Started session-8.scope. Sep 13 00:52:41.235216 systemd-logind[1297]: New session 8 of user core. Sep 13 00:52:41.370197 sshd[3446]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:41.373989 systemd-logind[1297]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:52:41.374090 systemd[1]: sshd@7-143.110.227.187:22-147.75.109.163:58616.service: Deactivated successfully. Sep 13 00:52:41.375016 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:52:41.375846 systemd-logind[1297]: Removed session 8. Sep 13 00:52:46.376006 systemd[1]: Started sshd@8-143.110.227.187:22-147.75.109.163:58630.service. Sep 13 00:52:46.431251 sshd[3460]: Accepted publickey for core from 147.75.109.163 port 58630 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:46.433648 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:46.440014 systemd[1]: Started session-9.scope. Sep 13 00:52:46.440234 systemd-logind[1297]: New session 9 of user core. Sep 13 00:52:46.594217 sshd[3460]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:46.597516 systemd-logind[1297]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:52:46.598389 systemd[1]: sshd@8-143.110.227.187:22-147.75.109.163:58630.service: Deactivated successfully. Sep 13 00:52:46.599717 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 13 00:52:46.601101 systemd-logind[1297]: Removed session 9. Sep 13 00:52:51.599844 systemd[1]: Started sshd@9-143.110.227.187:22-147.75.109.163:50802.service. Sep 13 00:52:51.649943 sshd[3473]: Accepted publickey for core from 147.75.109.163 port 50802 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:51.651513 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:51.657577 systemd[1]: Started session-10.scope. Sep 13 00:52:51.658790 systemd-logind[1297]: New session 10 of user core. Sep 13 00:52:51.798202 sshd[3473]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:51.804070 systemd[1]: Started sshd@10-143.110.227.187:22-147.75.109.163:50818.service. Sep 13 00:52:51.804754 systemd[1]: sshd@9-143.110.227.187:22-147.75.109.163:50802.service: Deactivated successfully. Sep 13 00:52:51.810258 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:52:51.810353 systemd-logind[1297]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:52:51.813180 systemd-logind[1297]: Removed session 10. Sep 13 00:52:51.859197 sshd[3487]: Accepted publickey for core from 147.75.109.163 port 50818 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:51.859282 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:51.863986 systemd-logind[1297]: New session 11 of user core. Sep 13 00:52:51.864585 systemd[1]: Started session-11.scope. Sep 13 00:52:52.122542 sshd[3487]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:52.124301 systemd[1]: Started sshd@11-143.110.227.187:22-147.75.109.163:50826.service. Sep 13 00:52:52.142205 systemd-logind[1297]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:52:52.143712 systemd[1]: sshd@10-143.110.227.187:22-147.75.109.163:50818.service: Deactivated successfully. Sep 13 00:52:52.144762 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 13 00:52:52.146816 systemd-logind[1297]: Removed session 11. Sep 13 00:52:52.204575 sshd[3498]: Accepted publickey for core from 147.75.109.163 port 50826 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:52.206670 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:52.213250 systemd[1]: Started session-12.scope. Sep 13 00:52:52.213685 systemd-logind[1297]: New session 12 of user core. Sep 13 00:52:52.364185 sshd[3498]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:52.367884 systemd[1]: sshd@11-143.110.227.187:22-147.75.109.163:50826.service: Deactivated successfully. Sep 13 00:52:52.368754 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:52:52.369448 systemd-logind[1297]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:52:52.370332 systemd-logind[1297]: Removed session 12. Sep 13 00:52:57.368725 systemd[1]: Started sshd@12-143.110.227.187:22-147.75.109.163:50836.service. Sep 13 00:52:57.417949 sshd[3513]: Accepted publickey for core from 147.75.109.163 port 50836 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:52:57.420293 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:52:57.426559 systemd[1]: Started session-13.scope. Sep 13 00:52:57.427026 systemd-logind[1297]: New session 13 of user core. Sep 13 00:52:57.575362 sshd[3513]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:57.578724 systemd[1]: sshd@12-143.110.227.187:22-147.75.109.163:50836.service: Deactivated successfully. Sep 13 00:52:57.580870 systemd-logind[1297]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:52:57.581200 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:52:57.582320 systemd-logind[1297]: Removed session 13. Sep 13 00:53:02.580086 systemd[1]: Started sshd@13-143.110.227.187:22-147.75.109.163:56046.service. 
Sep 13 00:53:02.630989 sshd[3528]: Accepted publickey for core from 147.75.109.163 port 56046 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:02.633385 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:02.639650 systemd[1]: Started session-14.scope. Sep 13 00:53:02.640281 systemd-logind[1297]: New session 14 of user core. Sep 13 00:53:02.794201 sshd[3528]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:02.798448 systemd[1]: Started sshd@14-143.110.227.187:22-147.75.109.163:56056.service. Sep 13 00:53:02.800479 systemd[1]: sshd@13-143.110.227.187:22-147.75.109.163:56046.service: Deactivated successfully. Sep 13 00:53:02.802192 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:53:02.802971 systemd-logind[1297]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:53:02.804467 systemd-logind[1297]: Removed session 14. Sep 13 00:53:02.848652 sshd[3539]: Accepted publickey for core from 147.75.109.163 port 56056 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:02.851770 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:02.858840 systemd[1]: Started session-15.scope. Sep 13 00:53:02.860007 systemd-logind[1297]: New session 15 of user core. Sep 13 00:53:03.212166 sshd[3539]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:03.217296 systemd[1]: Started sshd@15-143.110.227.187:22-147.75.109.163:56062.service. Sep 13 00:53:03.221492 systemd-logind[1297]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:53:03.222104 systemd[1]: sshd@14-143.110.227.187:22-147.75.109.163:56056.service: Deactivated successfully. Sep 13 00:53:03.224337 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:53:03.226310 systemd-logind[1297]: Removed session 15. 
Sep 13 00:53:03.283249 sshd[3550]: Accepted publickey for core from 147.75.109.163 port 56062 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:03.288331 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:03.294951 systemd[1]: Started session-16.scope. Sep 13 00:53:03.295511 systemd-logind[1297]: New session 16 of user core. Sep 13 00:53:04.970218 systemd[1]: Started sshd@16-143.110.227.187:22-147.75.109.163:56072.service. Sep 13 00:53:04.973286 sshd[3550]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:04.979862 systemd[1]: sshd@15-143.110.227.187:22-147.75.109.163:56062.service: Deactivated successfully. Sep 13 00:53:04.981166 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:53:04.987270 systemd-logind[1297]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:53:04.988665 systemd-logind[1297]: Removed session 16. Sep 13 00:53:05.037289 sshd[3566]: Accepted publickey for core from 147.75.109.163 port 56072 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:05.039784 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:05.045942 systemd-logind[1297]: New session 17 of user core. Sep 13 00:53:05.046840 systemd[1]: Started session-17.scope. Sep 13 00:53:05.393387 sshd[3566]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:05.398337 systemd[1]: Started sshd@17-143.110.227.187:22-147.75.109.163:56086.service. Sep 13 00:53:05.403123 systemd[1]: sshd@16-143.110.227.187:22-147.75.109.163:56072.service: Deactivated successfully. Sep 13 00:53:05.404354 systemd-logind[1297]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:53:05.404449 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:53:05.411815 systemd-logind[1297]: Removed session 17. 
Sep 13 00:53:05.458496 sshd[3579]: Accepted publickey for core from 147.75.109.163 port 56086 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:05.460957 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:05.468149 systemd-logind[1297]: New session 18 of user core. Sep 13 00:53:05.468654 systemd[1]: Started session-18.scope. Sep 13 00:53:05.618451 sshd[3579]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:05.622759 systemd[1]: sshd@17-143.110.227.187:22-147.75.109.163:56086.service: Deactivated successfully. Sep 13 00:53:05.624850 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:53:05.625640 systemd-logind[1297]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:53:05.627168 systemd-logind[1297]: Removed session 18. Sep 13 00:53:10.623182 systemd[1]: Started sshd@18-143.110.227.187:22-147.75.109.163:56214.service. Sep 13 00:53:10.639078 kubelet[2066]: E0913 00:53:10.639038 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:10.674622 sshd[3594]: Accepted publickey for core from 147.75.109.163 port 56214 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:10.676906 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:10.687907 systemd[1]: Started session-19.scope. Sep 13 00:53:10.688367 systemd-logind[1297]: New session 19 of user core. Sep 13 00:53:10.825489 sshd[3594]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:10.830523 systemd[1]: sshd@18-143.110.227.187:22-147.75.109.163:56214.service: Deactivated successfully. Sep 13 00:53:10.831840 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:53:10.832246 systemd-logind[1297]: Session 19 logged out. Waiting for processes to exit. 
Sep 13 00:53:10.833371 systemd-logind[1297]: Removed session 19. Sep 13 00:53:11.638626 kubelet[2066]: E0913 00:53:11.638585 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:12.638501 kubelet[2066]: E0913 00:53:12.638457 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:14.643283 kubelet[2066]: E0913 00:53:14.643240 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:15.830220 systemd[1]: Started sshd@19-143.110.227.187:22-147.75.109.163:56228.service. Sep 13 00:53:15.881937 sshd[3609]: Accepted publickey for core from 147.75.109.163 port 56228 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:15.882886 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:15.905111 systemd[1]: Started session-20.scope. Sep 13 00:53:15.907442 systemd-logind[1297]: New session 20 of user core. Sep 13 00:53:16.044640 sshd[3609]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:16.052599 systemd[1]: sshd@19-143.110.227.187:22-147.75.109.163:56228.service: Deactivated successfully. Sep 13 00:53:16.054929 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:53:16.055826 systemd-logind[1297]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:53:16.058112 systemd-logind[1297]: Removed session 20. 
Sep 13 00:53:16.638727 kubelet[2066]: E0913 00:53:16.638690 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:21.051834 systemd[1]: Started sshd@20-143.110.227.187:22-147.75.109.163:46150.service. Sep 13 00:53:21.103892 sshd[3621]: Accepted publickey for core from 147.75.109.163 port 46150 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:21.104678 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:21.110649 systemd[1]: Started session-21.scope. Sep 13 00:53:21.112070 systemd-logind[1297]: New session 21 of user core. Sep 13 00:53:21.249659 sshd[3621]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:21.253488 systemd[1]: sshd@20-143.110.227.187:22-147.75.109.163:46150.service: Deactivated successfully. Sep 13 00:53:21.255820 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:53:21.256335 systemd-logind[1297]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:53:21.258886 systemd-logind[1297]: Removed session 21. Sep 13 00:53:24.637587 kubelet[2066]: E0913 00:53:24.637524 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:26.253875 systemd[1]: Started sshd@21-143.110.227.187:22-147.75.109.163:46154.service. Sep 13 00:53:26.304150 sshd[3634]: Accepted publickey for core from 147.75.109.163 port 46154 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:26.305840 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:26.313128 systemd[1]: Started session-22.scope. Sep 13 00:53:26.313469 systemd-logind[1297]: New session 22 of user core. 
Sep 13 00:53:26.445907 sshd[3634]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:26.448739 systemd[1]: sshd@21-143.110.227.187:22-147.75.109.163:46154.service: Deactivated successfully. Sep 13 00:53:26.449688 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:53:26.450792 systemd-logind[1297]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:53:26.451776 systemd-logind[1297]: Removed session 22. Sep 13 00:53:31.450853 systemd[1]: Started sshd@22-143.110.227.187:22-147.75.109.163:38256.service. Sep 13 00:53:31.507970 sshd[3649]: Accepted publickey for core from 147.75.109.163 port 38256 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:31.510846 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:31.518381 systemd[1]: Started session-23.scope. Sep 13 00:53:31.518937 systemd-logind[1297]: New session 23 of user core. Sep 13 00:53:31.649032 sshd[3649]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:31.651778 systemd[1]: sshd@22-143.110.227.187:22-147.75.109.163:38256.service: Deactivated successfully. Sep 13 00:53:31.653196 systemd-logind[1297]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:53:31.653737 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:53:31.654667 systemd-logind[1297]: Removed session 23. Sep 13 00:53:36.654201 systemd[1]: Started sshd@23-143.110.227.187:22-147.75.109.163:38268.service. Sep 13 00:53:36.701193 sshd[3662]: Accepted publickey for core from 147.75.109.163 port 38268 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:36.702785 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:36.709194 systemd[1]: Started session-24.scope. Sep 13 00:53:36.709851 systemd-logind[1297]: New session 24 of user core. 
Sep 13 00:53:36.847006 sshd[3662]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:36.853836 systemd[1]: Started sshd@24-143.110.227.187:22-147.75.109.163:38280.service. Sep 13 00:53:36.855160 systemd[1]: sshd@23-143.110.227.187:22-147.75.109.163:38268.service: Deactivated successfully. Sep 13 00:53:36.857442 systemd-logind[1297]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:53:36.857761 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:53:36.859026 systemd-logind[1297]: Removed session 24. Sep 13 00:53:36.908882 sshd[3673]: Accepted publickey for core from 147.75.109.163 port 38280 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:36.910259 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:36.916235 systemd[1]: Started session-25.scope. Sep 13 00:53:36.916488 systemd-logind[1297]: New session 25 of user core. Sep 13 00:53:37.638576 kubelet[2066]: E0913 00:53:37.638533 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:38.939087 env[1309]: time="2025-09-13T00:53:38.939025034Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:53:38.952152 env[1309]: time="2025-09-13T00:53:38.952110375Z" level=info msg="StopContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" with timeout 2 (s)" Sep 13 00:53:38.952403 env[1309]: time="2025-09-13T00:53:38.952370553Z" level=info msg="StopContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" with timeout 30 (s)" Sep 13 00:53:38.952741 env[1309]: time="2025-09-13T00:53:38.952715184Z" level=info 
msg="Stop container \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" with signal terminated" Sep 13 00:53:38.953000 env[1309]: time="2025-09-13T00:53:38.952978592Z" level=info msg="Stop container \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" with signal terminated" Sep 13 00:53:38.971051 systemd-networkd[1062]: lxc_health: Link DOWN Sep 13 00:53:38.971058 systemd-networkd[1062]: lxc_health: Lost carrier Sep 13 00:53:39.016606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41-rootfs.mount: Deactivated successfully. Sep 13 00:53:39.023130 env[1309]: time="2025-09-13T00:53:39.023072537Z" level=info msg="shim disconnected" id=e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41 Sep 13 00:53:39.023130 env[1309]: time="2025-09-13T00:53:39.023124927Z" level=warning msg="cleaning up after shim disconnected" id=e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41 namespace=k8s.io Sep 13 00:53:39.023130 env[1309]: time="2025-09-13T00:53:39.023134535Z" level=info msg="cleaning up dead shim" Sep 13 00:53:39.028929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549-rootfs.mount: Deactivated successfully. 
Sep 13 00:53:39.038862 env[1309]: time="2025-09-13T00:53:39.038815318Z" level=info msg="shim disconnected" id=777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549 Sep 13 00:53:39.039264 env[1309]: time="2025-09-13T00:53:39.039239915Z" level=warning msg="cleaning up after shim disconnected" id=777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549 namespace=k8s.io Sep 13 00:53:39.039378 env[1309]: time="2025-09-13T00:53:39.039361924Z" level=info msg="cleaning up dead shim" Sep 13 00:53:39.047926 env[1309]: time="2025-09-13T00:53:39.047863626Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3745 runtime=io.containerd.runc.v2\n" Sep 13 00:53:39.049687 env[1309]: time="2025-09-13T00:53:39.049648906Z" level=info msg="StopContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" returns successfully" Sep 13 00:53:39.050663 env[1309]: time="2025-09-13T00:53:39.050626951Z" level=info msg="StopPodSandbox for \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\"" Sep 13 00:53:39.050867 env[1309]: time="2025-09-13T00:53:39.050832274Z" level=info msg="Container to stop \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.053710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31-shm.mount: Deactivated successfully. 
Sep 13 00:53:39.058944 env[1309]: time="2025-09-13T00:53:39.058885131Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3754 runtime=io.containerd.runc.v2\n" Sep 13 00:53:39.061074 env[1309]: time="2025-09-13T00:53:39.061024411Z" level=info msg="StopContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" returns successfully" Sep 13 00:53:39.062731 env[1309]: time="2025-09-13T00:53:39.062688751Z" level=info msg="StopPodSandbox for \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\"" Sep 13 00:53:39.062842 env[1309]: time="2025-09-13T00:53:39.062769270Z" level=info msg="Container to stop \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.062842 env[1309]: time="2025-09-13T00:53:39.062788561Z" level=info msg="Container to stop \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.062946 env[1309]: time="2025-09-13T00:53:39.062804358Z" level=info msg="Container to stop \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.062946 env[1309]: time="2025-09-13T00:53:39.062854901Z" level=info msg="Container to stop \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.062946 env[1309]: time="2025-09-13T00:53:39.062870725Z" level=info msg="Container to stop \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:39.065464 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4-shm.mount: Deactivated successfully. Sep 13 00:53:39.117931 env[1309]: time="2025-09-13T00:53:39.117657144Z" level=info msg="shim disconnected" id=f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4 Sep 13 00:53:39.118857 env[1309]: time="2025-09-13T00:53:39.118224237Z" level=warning msg="cleaning up after shim disconnected" id=f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4 namespace=k8s.io Sep 13 00:53:39.119023 env[1309]: time="2025-09-13T00:53:39.119002935Z" level=info msg="cleaning up dead shim" Sep 13 00:53:39.119360 env[1309]: time="2025-09-13T00:53:39.119323470Z" level=info msg="shim disconnected" id=4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31 Sep 13 00:53:39.121096 env[1309]: time="2025-09-13T00:53:39.121071514Z" level=warning msg="cleaning up after shim disconnected" id=4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31 namespace=k8s.io Sep 13 00:53:39.121227 env[1309]: time="2025-09-13T00:53:39.121210715Z" level=info msg="cleaning up dead shim" Sep 13 00:53:39.133397 env[1309]: time="2025-09-13T00:53:39.133348866Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n" Sep 13 00:53:39.133894 env[1309]: time="2025-09-13T00:53:39.133862087Z" level=info msg="TearDown network for sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" successfully" Sep 13 00:53:39.134059 env[1309]: time="2025-09-13T00:53:39.134038211Z" level=info msg="StopPodSandbox for \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" returns successfully" Sep 13 00:53:39.138478 env[1309]: time="2025-09-13T00:53:39.138443618Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 
runtime=io.containerd.runc.v2\n" Sep 13 00:53:39.138983 env[1309]: time="2025-09-13T00:53:39.138954921Z" level=info msg="TearDown network for sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" successfully" Sep 13 00:53:39.140838 env[1309]: time="2025-09-13T00:53:39.140043840Z" level=info msg="StopPodSandbox for \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" returns successfully" Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.204322 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hostproc\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.205118 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5vjm\" (UniqueName: \"kubernetes.io/projected/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-kube-api-access-v5vjm\") pod \"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\" (UID: \"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\") " Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.205184 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-bpf-maps\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.205201 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-etc-cni-netd\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.205250 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-cgroup\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205359 kubelet[2066]: I0913 00:53:39.205272 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hubble-tls\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: I0913 00:53:39.205314 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-clustermesh-secrets\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: I0913 00:53:39.205351 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwncc\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-kube-api-access-pwncc\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: I0913 00:53:39.205370 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-config-path\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: I0913 00:53:39.205384 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-net\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: 
I0913 00:53:39.205400 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-lib-modules\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.205943 kubelet[2066]: I0913 00:53:39.205413 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-run\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.206118 kubelet[2066]: I0913 00:53:39.205534 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cni-path\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.206118 kubelet[2066]: I0913 00:53:39.205565 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-kernel\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.206118 kubelet[2066]: I0913 00:53:39.205581 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-xtables-lock\") pod \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\" (UID: \"7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716\") " Sep 13 00:53:39.206118 kubelet[2066]: I0913 00:53:39.205597 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-cilium-config-path\") pod 
\"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\" (UID: \"22b8e83a-501b-47e5-a4ee-f0a4529e69fd\") " Sep 13 00:53:39.215092 kubelet[2066]: I0913 00:53:39.207400 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hostproc" (OuterVolumeSpecName: "hostproc") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.217554 kubelet[2066]: I0913 00:53:39.217230 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22b8e83a-501b-47e5-a4ee-f0a4529e69fd" (UID: "22b8e83a-501b-47e5-a4ee-f0a4529e69fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:53:39.219366 kubelet[2066]: I0913 00:53:39.219269 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:53:39.219366 kubelet[2066]: I0913 00:53:39.219333 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.219366 kubelet[2066]: I0913 00:53:39.219350 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.219366 kubelet[2066]: I0913 00:53:39.219366 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.224436 kubelet[2066]: I0913 00:53:39.224381 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:39.224436 kubelet[2066]: I0913 00:53:39.224391 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-kube-api-access-v5vjm" (OuterVolumeSpecName: "kube-api-access-v5vjm") pod "22b8e83a-501b-47e5-a4ee-f0a4529e69fd" (UID: "22b8e83a-501b-47e5-a4ee-f0a4529e69fd"). InnerVolumeSpecName "kube-api-access-v5vjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:39.224736 kubelet[2066]: I0913 00:53:39.224710 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.224830 kubelet[2066]: I0913 00:53:39.224816 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.224907 kubelet[2066]: I0913 00:53:39.224894 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.225018 kubelet[2066]: I0913 00:53:39.225002 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cni-path" (OuterVolumeSpecName: "cni-path") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.225097 kubelet[2066]: I0913 00:53:39.225084 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.225174 kubelet[2066]: I0913 00:53:39.225161 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:39.229368 kubelet[2066]: I0913 00:53:39.229327 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-kube-api-access-pwncc" (OuterVolumeSpecName: "kube-api-access-pwncc") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "kube-api-access-pwncc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:39.230464 kubelet[2066]: I0913 00:53:39.230401 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" (UID: "7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:53:39.306815 kubelet[2066]: I0913 00:53:39.306771 2066 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hubble-tls\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307096 kubelet[2066]: I0913 00:53:39.307075 2066 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-clustermesh-secrets\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307190 kubelet[2066]: I0913 00:53:39.307171 2066 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwncc\" (UniqueName: \"kubernetes.io/projected/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-kube-api-access-pwncc\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307266 kubelet[2066]: I0913 00:53:39.307254 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-config-path\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307397 kubelet[2066]: I0913 00:53:39.307379 2066 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-net\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307484 kubelet[2066]: I0913 00:53:39.307471 2066 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cni-path\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307619 kubelet[2066]: I0913 00:53:39.307600 2066 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-lib-modules\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307744 kubelet[2066]: I0913 00:53:39.307723 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-run\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307844 kubelet[2066]: I0913 00:53:39.307831 2066 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.307941 kubelet[2066]: I0913 00:53:39.307900 2066 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-xtables-lock\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308019 kubelet[2066]: I0913 00:53:39.308007 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-cilium-config-path\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308088 kubelet[2066]: I0913 00:53:39.308076 2066 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-hostproc\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308152 kubelet[2066]: I0913 00:53:39.308141 2066 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5vjm\" (UniqueName: \"kubernetes.io/projected/22b8e83a-501b-47e5-a4ee-f0a4529e69fd-kube-api-access-v5vjm\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308214 kubelet[2066]: I0913 00:53:39.308203 2066 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-bpf-maps\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308275 kubelet[2066]: I0913 00:53:39.308264 2066 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-etc-cni-netd\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.308342 kubelet[2066]: I0913 00:53:39.308331 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716-cilium-cgroup\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:39.913369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31-rootfs.mount: Deactivated successfully. Sep 13 00:53:39.913878 systemd[1]: var-lib-kubelet-pods-22b8e83a\x2d501b\x2d47e5\x2da4ee\x2df0a4529e69fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv5vjm.mount: Deactivated successfully. Sep 13 00:53:39.914154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4-rootfs.mount: Deactivated successfully. Sep 13 00:53:39.914357 systemd[1]: var-lib-kubelet-pods-7e5ab4dd\x2d4c3a\x2d417c\x2d8bc8\x2d51f1aeb1c716-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwncc.mount: Deactivated successfully. Sep 13 00:53:39.914580 systemd[1]: var-lib-kubelet-pods-7e5ab4dd\x2d4c3a\x2d417c\x2d8bc8\x2d51f1aeb1c716-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:53:39.914770 systemd[1]: var-lib-kubelet-pods-7e5ab4dd\x2d4c3a\x2d417c\x2d8bc8\x2d51f1aeb1c716-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:53:39.982297 kubelet[2066]: I0913 00:53:39.982215 2066 scope.go:117] "RemoveContainer" containerID="e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41" Sep 13 00:53:39.992742 env[1309]: time="2025-09-13T00:53:39.992673970Z" level=info msg="RemoveContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\"" Sep 13 00:53:40.000575 env[1309]: time="2025-09-13T00:53:40.000502364Z" level=info msg="RemoveContainer for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" returns successfully" Sep 13 00:53:40.001925 kubelet[2066]: I0913 00:53:40.001884 2066 scope.go:117] "RemoveContainer" containerID="e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41" Sep 13 00:53:40.003767 env[1309]: time="2025-09-13T00:53:40.003668035Z" level=error msg="ContainerStatus for \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\": not found" Sep 13 00:53:40.012659 kubelet[2066]: E0913 00:53:40.012609 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\": not found" containerID="e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41" Sep 13 00:53:40.013138 kubelet[2066]: I0913 00:53:40.013005 2066 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41"} err="failed to get container status \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\": rpc error: code = NotFound desc = an error occurred when try to find container \"e439748d621fcd2b845086be1a06144d193de05258d174130b43ca381ad07e41\": not found" Sep 13 00:53:40.013276 kubelet[2066]: I0913 
00:53:40.013257 2066 scope.go:117] "RemoveContainer" containerID="777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549" Sep 13 00:53:40.016310 env[1309]: time="2025-09-13T00:53:40.016274879Z" level=info msg="RemoveContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\"" Sep 13 00:53:40.021308 env[1309]: time="2025-09-13T00:53:40.021250841Z" level=info msg="RemoveContainer for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" returns successfully" Sep 13 00:53:40.022224 kubelet[2066]: I0913 00:53:40.022202 2066 scope.go:117] "RemoveContainer" containerID="27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd" Sep 13 00:53:40.025691 env[1309]: time="2025-09-13T00:53:40.025650425Z" level=info msg="RemoveContainer for \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\"" Sep 13 00:53:40.029076 env[1309]: time="2025-09-13T00:53:40.029027882Z" level=info msg="RemoveContainer for \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\" returns successfully" Sep 13 00:53:40.029453 kubelet[2066]: I0913 00:53:40.029428 2066 scope.go:117] "RemoveContainer" containerID="7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c" Sep 13 00:53:40.031178 env[1309]: time="2025-09-13T00:53:40.031124126Z" level=info msg="RemoveContainer for \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\"" Sep 13 00:53:40.033488 env[1309]: time="2025-09-13T00:53:40.033449210Z" level=info msg="RemoveContainer for \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\" returns successfully" Sep 13 00:53:40.033976 kubelet[2066]: I0913 00:53:40.033954 2066 scope.go:117] "RemoveContainer" containerID="47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790" Sep 13 00:53:40.035738 env[1309]: time="2025-09-13T00:53:40.035698210Z" level=info msg="RemoveContainer for \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\"" Sep 13 00:53:40.041417 
env[1309]: time="2025-09-13T00:53:40.041355823Z" level=info msg="RemoveContainer for \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\" returns successfully" Sep 13 00:53:40.041932 kubelet[2066]: I0913 00:53:40.041888 2066 scope.go:117] "RemoveContainer" containerID="21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e" Sep 13 00:53:40.043784 env[1309]: time="2025-09-13T00:53:40.043720613Z" level=info msg="RemoveContainer for \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\"" Sep 13 00:53:40.046111 env[1309]: time="2025-09-13T00:53:40.046068782Z" level=info msg="RemoveContainer for \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\" returns successfully" Sep 13 00:53:40.047365 kubelet[2066]: I0913 00:53:40.047337 2066 scope.go:117] "RemoveContainer" containerID="777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549" Sep 13 00:53:40.048186 env[1309]: time="2025-09-13T00:53:40.047815895Z" level=error msg="ContainerStatus for \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\": not found" Sep 13 00:53:40.048404 kubelet[2066]: E0913 00:53:40.048375 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\": not found" containerID="777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549" Sep 13 00:53:40.048554 kubelet[2066]: I0913 00:53:40.048523 2066 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549"} err="failed to get container status \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"777710efc3dae47521e089a6e02f648fed1df1a3d569a69b1b01727d3bd30549\": not found" Sep 13 00:53:40.048659 kubelet[2066]: I0913 00:53:40.048644 2066 scope.go:117] "RemoveContainer" containerID="27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd" Sep 13 00:53:40.049107 env[1309]: time="2025-09-13T00:53:40.049012747Z" level=error msg="ContainerStatus for \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\": not found" Sep 13 00:53:40.049299 kubelet[2066]: E0913 00:53:40.049277 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\": not found" containerID="27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd" Sep 13 00:53:40.049421 kubelet[2066]: I0913 00:53:40.049392 2066 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd"} err="failed to get container status \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"27548f6663202a164e382512a67fd38e59e27d6dd9a888fabf076fcdad7d0bfd\": not found" Sep 13 00:53:40.049507 kubelet[2066]: I0913 00:53:40.049492 2066 scope.go:117] "RemoveContainer" containerID="7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c" Sep 13 00:53:40.049872 env[1309]: time="2025-09-13T00:53:40.049812321Z" level=error msg="ContainerStatus for \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\": not found" Sep 13 00:53:40.050173 kubelet[2066]: E0913 00:53:40.050146 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\": not found" containerID="7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c" Sep 13 00:53:40.050348 kubelet[2066]: I0913 00:53:40.050325 2066 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c"} err="failed to get container status \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cf003b94b1dfd09fffa42a2d72c15246bbc8754536a34aa61d2caab42cf710c\": not found" Sep 13 00:53:40.050445 kubelet[2066]: I0913 00:53:40.050429 2066 scope.go:117] "RemoveContainer" containerID="47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790" Sep 13 00:53:40.050811 env[1309]: time="2025-09-13T00:53:40.050746401Z" level=error msg="ContainerStatus for \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\": not found" Sep 13 00:53:40.051129 kubelet[2066]: E0913 00:53:40.051108 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\": not found" containerID="47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790" Sep 13 00:53:40.051269 kubelet[2066]: I0913 00:53:40.051243 2066 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790"} err="failed to get container status \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\": rpc error: code = NotFound desc = an error occurred when try to find container \"47811f7a833919e68b659dfbe89234d1b0b962d7f01efd1dfcb901b17a4c6790\": not found" Sep 13 00:53:40.051362 kubelet[2066]: I0913 00:53:40.051346 2066 scope.go:117] "RemoveContainer" containerID="21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e" Sep 13 00:53:40.051884 env[1309]: time="2025-09-13T00:53:40.051803077Z" level=error msg="ContainerStatus for \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\": not found" Sep 13 00:53:40.052055 kubelet[2066]: E0913 00:53:40.052035 2066 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\": not found" containerID="21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e" Sep 13 00:53:40.052160 kubelet[2066]: I0913 00:53:40.052142 2066 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e"} err="failed to get container status \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\": rpc error: code = NotFound desc = an error occurred when try to find container \"21fcda88c4a753792247dafc618abeae7732b530017e9e9cd34841989102957e\": not found" Sep 13 00:53:40.838226 sshd[3673]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:40.843123 systemd[1]: Started sshd@25-143.110.227.187:22-147.75.109.163:56430.service. 
Sep 13 00:53:40.847756 systemd[1]: sshd@24-143.110.227.187:22-147.75.109.163:38280.service: Deactivated successfully. Sep 13 00:53:40.849437 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:53:40.851085 systemd-logind[1297]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:53:40.852459 systemd-logind[1297]: Removed session 25. Sep 13 00:53:40.900047 sshd[3843]: Accepted publickey for core from 147.75.109.163 port 56430 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:40.903171 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:40.910873 systemd[1]: Started session-26.scope. Sep 13 00:53:40.912010 systemd-logind[1297]: New session 26 of user core. Sep 13 00:53:41.609236 systemd[1]: Started sshd@26-143.110.227.187:22-147.75.109.163:56444.service. Sep 13 00:53:41.615978 sshd[3843]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:41.624591 systemd[1]: sshd@25-143.110.227.187:22-147.75.109.163:56430.service: Deactivated successfully. Sep 13 00:53:41.625540 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:53:41.627131 systemd-logind[1297]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:53:41.629387 systemd-logind[1297]: Removed session 26. 
Sep 13 00:53:41.644196 kubelet[2066]: I0913 00:53:41.644137 2066 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b8e83a-501b-47e5-a4ee-f0a4529e69fd" path="/var/lib/kubelet/pods/22b8e83a-501b-47e5-a4ee-f0a4529e69fd/volumes" Sep 13 00:53:41.645657 kubelet[2066]: I0913 00:53:41.645607 2066 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" path="/var/lib/kubelet/pods/7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716/volumes" Sep 13 00:53:41.656757 kubelet[2066]: E0913 00:53:41.656715 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="mount-cgroup" Sep 13 00:53:41.656998 kubelet[2066]: E0913 00:53:41.656983 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="apply-sysctl-overwrites" Sep 13 00:53:41.657085 kubelet[2066]: E0913 00:53:41.657073 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22b8e83a-501b-47e5-a4ee-f0a4529e69fd" containerName="cilium-operator" Sep 13 00:53:41.657166 kubelet[2066]: E0913 00:53:41.657156 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="cilium-agent" Sep 13 00:53:41.657251 kubelet[2066]: E0913 00:53:41.657240 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="mount-bpf-fs" Sep 13 00:53:41.657333 kubelet[2066]: E0913 00:53:41.657322 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="clean-cilium-state" Sep 13 00:53:41.657440 kubelet[2066]: I0913 00:53:41.657430 2066 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5ab4dd-4c3a-417c-8bc8-51f1aeb1c716" containerName="cilium-agent" Sep 13 00:53:41.657513 kubelet[2066]: I0913 00:53:41.657503 2066 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="22b8e83a-501b-47e5-a4ee-f0a4529e69fd" containerName="cilium-operator" Sep 13 00:53:41.729573 sshd[3854]: Accepted publickey for core from 147.75.109.163 port 56444 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:41.731535 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:41.734945 kubelet[2066]: I0913 00:53:41.733502 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-kernel\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.734945 kubelet[2066]: I0913 00:53:41.733544 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-ipsec-secrets\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.734945 kubelet[2066]: I0913 00:53:41.733563 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-net\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.734945 kubelet[2066]: I0913 00:53:41.733624 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-hubble-tls\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.734945 kubelet[2066]: I0913 00:53:41.733640 2066 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckw2j\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-kube-api-access-ckw2j\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.733661 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-hostproc\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.733685 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-lib-modules\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.733705 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cni-path\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.733932 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-config-path\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.733986 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-cgroup\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735199 kubelet[2066]: I0913 00:53:41.734001 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-xtables-lock\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735430 kubelet[2066]: I0913 00:53:41.734022 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-bpf-maps\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735430 kubelet[2066]: I0913 00:53:41.734036 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-etc-cni-netd\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735430 kubelet[2066]: I0913 00:53:41.734054 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-run\") pod \"cilium-2czlh\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.735430 kubelet[2066]: I0913 00:53:41.734069 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-clustermesh-secrets\") pod \"cilium-2czlh\" (UID: 
\"94b22157-a8c7-4bc8-96ed-7625c829d735\") " pod="kube-system/cilium-2czlh" Sep 13 00:53:41.737432 systemd[1]: Started session-27.scope. Sep 13 00:53:41.737976 systemd-logind[1297]: New session 27 of user core. Sep 13 00:53:41.775531 kubelet[2066]: E0913 00:53:41.775444 2066 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:53:41.962873 sshd[3854]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:41.965805 systemd[1]: Started sshd@27-143.110.227.187:22-147.75.109.163:56454.service. Sep 13 00:53:41.972358 systemd[1]: sshd@26-143.110.227.187:22-147.75.109.163:56444.service: Deactivated successfully. Sep 13 00:53:41.974198 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:53:41.974622 systemd-logind[1297]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:53:41.977346 systemd-logind[1297]: Removed session 27. Sep 13 00:53:41.986951 kubelet[2066]: E0913 00:53:41.986897 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:41.990653 env[1309]: time="2025-09-13T00:53:41.990227248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2czlh,Uid:94b22157-a8c7-4bc8-96ed-7625c829d735,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:42.020688 env[1309]: time="2025-09-13T00:53:42.020602733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:42.020688 env[1309]: time="2025-09-13T00:53:42.020647127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:42.020920 env[1309]: time="2025-09-13T00:53:42.020665509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:42.021157 env[1309]: time="2025-09-13T00:53:42.021059564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57 pid=3881 runtime=io.containerd.runc.v2 Sep 13 00:53:42.058186 sshd[3871]: Accepted publickey for core from 147.75.109.163 port 56454 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:53:42.057215 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:42.065179 systemd[1]: Started session-28.scope. Sep 13 00:53:42.065441 systemd-logind[1297]: New session 28 of user core. Sep 13 00:53:42.116114 env[1309]: time="2025-09-13T00:53:42.116052143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2czlh,Uid:94b22157-a8c7-4bc8-96ed-7625c829d735,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\"" Sep 13 00:53:42.117321 kubelet[2066]: E0913 00:53:42.117295 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:42.121585 env[1309]: time="2025-09-13T00:53:42.121538365Z" level=info msg="CreateContainer within sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:53:42.133096 env[1309]: time="2025-09-13T00:53:42.133036972Z" level=info msg="CreateContainer within sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} 
returns container id \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\"" Sep 13 00:53:42.134102 env[1309]: time="2025-09-13T00:53:42.134056965Z" level=info msg="StartContainer for \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\"" Sep 13 00:53:42.198944 env[1309]: time="2025-09-13T00:53:42.198718817Z" level=info msg="StartContainer for \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\" returns successfully" Sep 13 00:53:42.239794 env[1309]: time="2025-09-13T00:53:42.239683644Z" level=info msg="shim disconnected" id=354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece Sep 13 00:53:42.240730 env[1309]: time="2025-09-13T00:53:42.240704673Z" level=warning msg="cleaning up after shim disconnected" id=354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece namespace=k8s.io Sep 13 00:53:42.240847 env[1309]: time="2025-09-13T00:53:42.240831153Z" level=info msg="cleaning up dead shim" Sep 13 00:53:42.253687 env[1309]: time="2025-09-13T00:53:42.253635752Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3974 runtime=io.containerd.runc.v2\n" Sep 13 00:53:43.019290 env[1309]: time="2025-09-13T00:53:43.019212235Z" level=info msg="StopPodSandbox for \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\"" Sep 13 00:53:43.019790 env[1309]: time="2025-09-13T00:53:43.019344002Z" level=info msg="Container to stop \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:43.022478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57-shm.mount: Deactivated successfully. 
Sep 13 00:53:43.056027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57-rootfs.mount: Deactivated successfully. Sep 13 00:53:43.060478 env[1309]: time="2025-09-13T00:53:43.060428802Z" level=info msg="shim disconnected" id=83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57 Sep 13 00:53:43.060478 env[1309]: time="2025-09-13T00:53:43.060476936Z" level=warning msg="cleaning up after shim disconnected" id=83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57 namespace=k8s.io Sep 13 00:53:43.060752 env[1309]: time="2025-09-13T00:53:43.060486647Z" level=info msg="cleaning up dead shim" Sep 13 00:53:43.074202 env[1309]: time="2025-09-13T00:53:43.074148260Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4008 runtime=io.containerd.runc.v2\n" Sep 13 00:53:43.074496 env[1309]: time="2025-09-13T00:53:43.074465764Z" level=info msg="TearDown network for sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" successfully" Sep 13 00:53:43.074496 env[1309]: time="2025-09-13T00:53:43.074493124Z" level=info msg="StopPodSandbox for \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" returns successfully" Sep 13 00:53:43.143591 kubelet[2066]: I0913 00:53:43.143535 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-kernel\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.143591 kubelet[2066]: I0913 00:53:43.143602 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-config-path\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" 
(UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143628 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-net\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143650 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-bpf-maps\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143678 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-ipsec-secrets\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143696 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-hostproc\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143711 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-xtables-lock\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144155 kubelet[2066]: I0913 00:53:43.143727 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cni-path\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143743 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckw2j\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-kube-api-access-ckw2j\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143759 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-cgroup\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143774 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-etc-cni-netd\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143787 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-run\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143805 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-clustermesh-secrets\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144368 kubelet[2066]: I0913 00:53:43.143820 2066 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-lib-modules\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144631 kubelet[2066]: I0913 00:53:43.143840 2066 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-hubble-tls\") pod \"94b22157-a8c7-4bc8-96ed-7625c829d735\" (UID: \"94b22157-a8c7-4bc8-96ed-7625c829d735\") " Sep 13 00:53:43.144631 kubelet[2066]: I0913 00:53:43.144236 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.144631 kubelet[2066]: I0913 00:53:43.144275 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.146664 kubelet[2066]: I0913 00:53:43.146615 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:53:43.146816 kubelet[2066]: I0913 00:53:43.146682 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.146816 kubelet[2066]: I0913 00:53:43.146702 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.150696 systemd[1]: var-lib-kubelet-pods-94b22157\x2da8c7\x2d4bc8\x2d96ed\x2d7625c829d735-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:53:43.156219 kubelet[2066]: I0913 00:53:43.154735 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:53:43.156219 kubelet[2066]: I0913 00:53:43.154799 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-hostproc" (OuterVolumeSpecName: "hostproc") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.156219 kubelet[2066]: I0913 00:53:43.154820 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.156219 kubelet[2066]: I0913 00:53:43.154835 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cni-path" (OuterVolumeSpecName: "cni-path") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.154571 systemd[1]: var-lib-kubelet-pods-94b22157\x2da8c7\x2d4bc8\x2d96ed\x2d7625c829d735-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:53:43.158547 kubelet[2066]: I0913 00:53:43.158492 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:43.158700 kubelet[2066]: I0913 00:53:43.158559 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.159379 kubelet[2066]: I0913 00:53:43.159343 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-kube-api-access-ckw2j" (OuterVolumeSpecName: "kube-api-access-ckw2j") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "kube-api-access-ckw2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:43.159505 kubelet[2066]: I0913 00:53:43.159408 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.159505 kubelet[2066]: I0913 00:53:43.159430 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:43.162078 kubelet[2066]: I0913 00:53:43.162044 2066 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94b22157-a8c7-4bc8-96ed-7625c829d735" (UID: "94b22157-a8c7-4bc8-96ed-7625c829d735"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:53:43.244604 kubelet[2066]: I0913 00:53:43.244547 2066 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-hostproc\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.244874 kubelet[2066]: I0913 00:53:43.244847 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245074 kubelet[2066]: I0913 00:53:43.245054 2066 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cni-path\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245190 kubelet[2066]: I0913 00:53:43.245170 2066 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-xtables-lock\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245292 kubelet[2066]: I0913 00:53:43.245276 2066 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckw2j\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-kube-api-access-ckw2j\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245388 kubelet[2066]: I0913 00:53:43.245368 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-cgroup\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245486 kubelet[2066]: I0913 00:53:43.245468 2066 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-etc-cni-netd\") on node 
\"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245599 kubelet[2066]: I0913 00:53:43.245582 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-run\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245702 kubelet[2066]: I0913 00:53:43.245687 2066 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-lib-modules\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245800 kubelet[2066]: I0913 00:53:43.245784 2066 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94b22157-a8c7-4bc8-96ed-7625c829d735-clustermesh-secrets\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245892 kubelet[2066]: I0913 00:53:43.245875 2066 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94b22157-a8c7-4bc8-96ed-7625c829d735-hubble-tls\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.245989 kubelet[2066]: I0913 00:53:43.245976 2066 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.246111 kubelet[2066]: I0913 00:53:43.246092 2066 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94b22157-a8c7-4bc8-96ed-7625c829d735-cilium-config-path\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.246230 kubelet[2066]: I0913 00:53:43.246213 2066 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-host-proc-sys-net\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.246317 kubelet[2066]: I0913 00:53:43.246305 2066 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94b22157-a8c7-4bc8-96ed-7625c829d735-bpf-maps\") on node \"ci-3510.3.8-n-8fedea5c61\" DevicePath \"\"" Sep 13 00:53:43.842571 systemd[1]: var-lib-kubelet-pods-94b22157\x2da8c7\x2d4bc8\x2d96ed\x2d7625c829d735-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckw2j.mount: Deactivated successfully. Sep 13 00:53:43.842986 systemd[1]: var-lib-kubelet-pods-94b22157\x2da8c7\x2d4bc8\x2d96ed\x2d7625c829d735-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:53:44.022706 kubelet[2066]: I0913 00:53:44.022667 2066 scope.go:117] "RemoveContainer" containerID="354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece" Sep 13 00:53:44.028134 env[1309]: time="2025-09-13T00:53:44.027695518Z" level=info msg="RemoveContainer for \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\"" Sep 13 00:53:44.030765 env[1309]: time="2025-09-13T00:53:44.030710969Z" level=info msg="RemoveContainer for \"354a7672e1ca05e7e122f8fa2fc18e2cc3949a1c0cc671032c36268f492caece\" returns successfully" Sep 13 00:53:44.078171 kubelet[2066]: E0913 00:53:44.078126 2066 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94b22157-a8c7-4bc8-96ed-7625c829d735" containerName="mount-cgroup" Sep 13 00:53:44.078494 kubelet[2066]: I0913 00:53:44.078473 2066 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b22157-a8c7-4bc8-96ed-7625c829d735" containerName="mount-cgroup" Sep 13 00:53:44.151955 kubelet[2066]: I0913 00:53:44.151736 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-host-proc-sys-kernel\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.152604 kubelet[2066]: I0913 00:53:44.152574 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-hostproc\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.152725 kubelet[2066]: I0913 00:53:44.152710 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-cilium-cgroup\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.152803 kubelet[2066]: I0913 00:53:44.152789 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-lib-modules\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.152885 kubelet[2066]: I0913 00:53:44.152867 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ed11ae7-0012-4882-9bb3-f094034dc562-cilium-config-path\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153377 kubelet[2066]: I0913 00:53:44.152991 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ed11ae7-0012-4882-9bb3-f094034dc562-hubble-tls\") pod \"cilium-s6t9d\" (UID: 
\"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153377 kubelet[2066]: I0913 00:53:44.153144 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-cilium-run\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153377 kubelet[2066]: I0913 00:53:44.153178 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-bpf-maps\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153509 kubelet[2066]: I0913 00:53:44.153443 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-etc-cni-netd\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153509 kubelet[2066]: I0913 00:53:44.153497 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-xtables-lock\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153599 kubelet[2066]: I0913 00:53:44.153579 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ed11ae7-0012-4882-9bb3-f094034dc562-cilium-ipsec-secrets\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153648 kubelet[2066]: I0913 00:53:44.153613 2066 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-host-proc-sys-net\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153686 kubelet[2066]: I0913 00:53:44.153663 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjw9\" (UniqueName: \"kubernetes.io/projected/8ed11ae7-0012-4882-9bb3-f094034dc562-kube-api-access-gtjw9\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153718 kubelet[2066]: I0913 00:53:44.153694 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ed11ae7-0012-4882-9bb3-f094034dc562-cni-path\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.153765 kubelet[2066]: I0913 00:53:44.153735 2066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ed11ae7-0012-4882-9bb3-f094034dc562-clustermesh-secrets\") pod \"cilium-s6t9d\" (UID: \"8ed11ae7-0012-4882-9bb3-f094034dc562\") " pod="kube-system/cilium-s6t9d" Sep 13 00:53:44.382480 kubelet[2066]: E0913 00:53:44.382438 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:44.384808 env[1309]: time="2025-09-13T00:53:44.384750559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s6t9d,Uid:8ed11ae7-0012-4882-9bb3-f094034dc562,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:44.398890 env[1309]: time="2025-09-13T00:53:44.398669386Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:44.398890 env[1309]: time="2025-09-13T00:53:44.398718079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:44.398890 env[1309]: time="2025-09-13T00:53:44.398729205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:44.399156 env[1309]: time="2025-09-13T00:53:44.398957195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d pid=4037 runtime=io.containerd.runc.v2 Sep 13 00:53:44.467828 env[1309]: time="2025-09-13T00:53:44.467027443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s6t9d,Uid:8ed11ae7-0012-4882-9bb3-f094034dc562,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\"" Sep 13 00:53:44.468490 kubelet[2066]: E0913 00:53:44.468463 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:44.473850 env[1309]: time="2025-09-13T00:53:44.473805524Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:53:44.490725 env[1309]: time="2025-09-13T00:53:44.490635633Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23235065dbd6f20a843f1cedd20f4f94a280f9b19ffc7c2a4ac4010ac8c32364\"" Sep 13 00:53:44.493299 env[1309]: 
time="2025-09-13T00:53:44.492128166Z" level=info msg="StartContainer for \"23235065dbd6f20a843f1cedd20f4f94a280f9b19ffc7c2a4ac4010ac8c32364\"" Sep 13 00:53:44.556426 env[1309]: time="2025-09-13T00:53:44.556236683Z" level=info msg="StartContainer for \"23235065dbd6f20a843f1cedd20f4f94a280f9b19ffc7c2a4ac4010ac8c32364\" returns successfully" Sep 13 00:53:44.598255 env[1309]: time="2025-09-13T00:53:44.598197004Z" level=info msg="shim disconnected" id=23235065dbd6f20a843f1cedd20f4f94a280f9b19ffc7c2a4ac4010ac8c32364 Sep 13 00:53:44.598255 env[1309]: time="2025-09-13T00:53:44.598248160Z" level=warning msg="cleaning up after shim disconnected" id=23235065dbd6f20a843f1cedd20f4f94a280f9b19ffc7c2a4ac4010ac8c32364 namespace=k8s.io Sep 13 00:53:44.598255 env[1309]: time="2025-09-13T00:53:44.598257849Z" level=info msg="cleaning up dead shim" Sep 13 00:53:44.609107 env[1309]: time="2025-09-13T00:53:44.609053800Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4118 runtime=io.containerd.runc.v2\n" Sep 13 00:53:44.833402 kubelet[2066]: I0913 00:53:44.832524 2066 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-8fedea5c61" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:53:44Z","lastTransitionTime":"2025-09-13T00:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:53:45.026494 kubelet[2066]: E0913 00:53:45.026407 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:45.031800 env[1309]: time="2025-09-13T00:53:45.031737129Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:53:45.056608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256674805.mount: Deactivated successfully. Sep 13 00:53:45.059886 env[1309]: time="2025-09-13T00:53:45.059820973Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182\"" Sep 13 00:53:45.061206 env[1309]: time="2025-09-13T00:53:45.061161301Z" level=info msg="StartContainer for \"702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182\"" Sep 13 00:53:45.134300 env[1309]: time="2025-09-13T00:53:45.134123480Z" level=info msg="StartContainer for \"702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182\" returns successfully" Sep 13 00:53:45.166732 env[1309]: time="2025-09-13T00:53:45.166680897Z" level=info msg="shim disconnected" id=702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182 Sep 13 00:53:45.166732 env[1309]: time="2025-09-13T00:53:45.166725963Z" level=warning msg="cleaning up after shim disconnected" id=702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182 namespace=k8s.io Sep 13 00:53:45.166732 env[1309]: time="2025-09-13T00:53:45.166735182Z" level=info msg="cleaning up dead shim" Sep 13 00:53:45.176860 env[1309]: time="2025-09-13T00:53:45.176805372Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4179 runtime=io.containerd.runc.v2\n" Sep 13 00:53:45.640296 kubelet[2066]: I0913 00:53:45.640249 2066 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b22157-a8c7-4bc8-96ed-7625c829d735" path="/var/lib/kubelet/pods/94b22157-a8c7-4bc8-96ed-7625c829d735/volumes" Sep 13 00:53:45.846499 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-702f7fdd6b30323717b153b3bf4937d9715c57b201893e3170b8a96206a21182-rootfs.mount: Deactivated successfully. Sep 13 00:53:46.042261 kubelet[2066]: E0913 00:53:46.042215 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:46.044792 env[1309]: time="2025-09-13T00:53:46.044744278Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:53:46.061734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479757710.mount: Deactivated successfully. Sep 13 00:53:46.076858 env[1309]: time="2025-09-13T00:53:46.076788698Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a\"" Sep 13 00:53:46.078267 env[1309]: time="2025-09-13T00:53:46.078210194Z" level=info msg="StartContainer for \"75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a\"" Sep 13 00:53:46.160082 env[1309]: time="2025-09-13T00:53:46.159761169Z" level=info msg="StartContainer for \"75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a\" returns successfully" Sep 13 00:53:46.191260 env[1309]: time="2025-09-13T00:53:46.191207610Z" level=info msg="shim disconnected" id=75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a Sep 13 00:53:46.191586 env[1309]: time="2025-09-13T00:53:46.191564862Z" level=warning msg="cleaning up after shim disconnected" id=75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a namespace=k8s.io Sep 13 00:53:46.191682 env[1309]: time="2025-09-13T00:53:46.191667196Z" level=info msg="cleaning up dead shim" Sep 13 
00:53:46.202456 env[1309]: time="2025-09-13T00:53:46.202389537Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4235 runtime=io.containerd.runc.v2\n" Sep 13 00:53:46.637727 kubelet[2066]: E0913 00:53:46.637629 2066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-xmmp6" podUID="0a2c40ce-b4b9-4693-bcd1-8279badae4c7" Sep 13 00:53:46.777002 kubelet[2066]: E0913 00:53:46.776934 2066 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:53:46.847257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75ef79e346473f7105af6f1a0edffd5336e34a0c48a88807b78a1d28d9f0ce0a-rootfs.mount: Deactivated successfully. 
Sep 13 00:53:47.048961 kubelet[2066]: E0913 00:53:47.048925 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:47.057302 env[1309]: time="2025-09-13T00:53:47.056965785Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:53:47.085978 env[1309]: time="2025-09-13T00:53:47.083375020Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3\"" Sep 13 00:53:47.084414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766665495.mount: Deactivated successfully. Sep 13 00:53:47.093070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498351464.mount: Deactivated successfully. 
Sep 13 00:53:47.094294 env[1309]: time="2025-09-13T00:53:47.094250927Z" level=info msg="StartContainer for \"7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3\"" Sep 13 00:53:47.185206 env[1309]: time="2025-09-13T00:53:47.185146955Z" level=info msg="StartContainer for \"7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3\" returns successfully" Sep 13 00:53:47.215573 env[1309]: time="2025-09-13T00:53:47.215524871Z" level=info msg="shim disconnected" id=7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3 Sep 13 00:53:47.215863 env[1309]: time="2025-09-13T00:53:47.215843106Z" level=warning msg="cleaning up after shim disconnected" id=7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3 namespace=k8s.io Sep 13 00:53:47.216019 env[1309]: time="2025-09-13T00:53:47.215989400Z" level=info msg="cleaning up dead shim" Sep 13 00:53:47.225775 env[1309]: time="2025-09-13T00:53:47.225723045Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4290 runtime=io.containerd.runc.v2\n" Sep 13 00:53:47.846601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f4401c97a143281022747fa543ca6dcaea44b4074d6cbd51cb797d4364bb6c3-rootfs.mount: Deactivated successfully. Sep 13 00:53:48.053422 kubelet[2066]: E0913 00:53:48.053391 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:48.055535 env[1309]: time="2025-09-13T00:53:48.055491844Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:53:48.074056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245782256.mount: Deactivated successfully. 
Sep 13 00:53:48.087237 env[1309]: time="2025-09-13T00:53:48.087190328Z" level=info msg="CreateContainer within sandbox \"6c857ce8fbf5f7422dc2d1a5ad73a645c47ae8d4d052dd3b45bf0ed1a016908d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5\"" Sep 13 00:53:48.088555 env[1309]: time="2025-09-13T00:53:48.088517886Z" level=info msg="StartContainer for \"bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5\"" Sep 13 00:53:48.099203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961653848.mount: Deactivated successfully. Sep 13 00:53:48.156468 env[1309]: time="2025-09-13T00:53:48.156396018Z" level=info msg="StartContainer for \"bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5\" returns successfully" Sep 13 00:53:48.597948 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:53:48.638764 kubelet[2066]: E0913 00:53:48.638224 2066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-xmmp6" podUID="0a2c40ce-b4b9-4693-bcd1-8279badae4c7" Sep 13 00:53:49.057896 kubelet[2066]: E0913 00:53:49.057860 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:50.384187 kubelet[2066]: E0913 00:53:50.384142 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:50.584616 systemd[1]: run-containerd-runc-k8s.io-bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5-runc.mVAFdX.mount: Deactivated successfully. 
Sep 13 00:53:50.638075 kubelet[2066]: E0913 00:53:50.637928 2066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-xmmp6" podUID="0a2c40ce-b4b9-4693-bcd1-8279badae4c7" Sep 13 00:53:51.654023 env[1309]: time="2025-09-13T00:53:51.653978819Z" level=info msg="StopPodSandbox for \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\"" Sep 13 00:53:51.654452 env[1309]: time="2025-09-13T00:53:51.654082944Z" level=info msg="TearDown network for sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" successfully" Sep 13 00:53:51.654452 env[1309]: time="2025-09-13T00:53:51.654114305Z" level=info msg="StopPodSandbox for \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" returns successfully" Sep 13 00:53:51.654939 env[1309]: time="2025-09-13T00:53:51.654453851Z" level=info msg="RemovePodSandbox for \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\"" Sep 13 00:53:51.654939 env[1309]: time="2025-09-13T00:53:51.654480305Z" level=info msg="Forcibly stopping sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\"" Sep 13 00:53:51.654939 env[1309]: time="2025-09-13T00:53:51.654547119Z" level=info msg="TearDown network for sandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" successfully" Sep 13 00:53:51.663821 env[1309]: time="2025-09-13T00:53:51.663738267Z" level=info msg="RemovePodSandbox \"f7797e0d4610290f63fdf943f0cd882d648b68949bdf824752c4fa5b626089f4\" returns successfully" Sep 13 00:53:51.664894 env[1309]: time="2025-09-13T00:53:51.664858936Z" level=info msg="StopPodSandbox for \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\"" Sep 13 00:53:51.665024 env[1309]: time="2025-09-13T00:53:51.664990520Z" level=info msg="TearDown network 
for sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" successfully" Sep 13 00:53:51.665103 env[1309]: time="2025-09-13T00:53:51.665022290Z" level=info msg="StopPodSandbox for \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" returns successfully" Sep 13 00:53:51.666586 env[1309]: time="2025-09-13T00:53:51.666553888Z" level=info msg="RemovePodSandbox for \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\"" Sep 13 00:53:51.666674 env[1309]: time="2025-09-13T00:53:51.666591882Z" level=info msg="Forcibly stopping sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\"" Sep 13 00:53:51.666710 env[1309]: time="2025-09-13T00:53:51.666681828Z" level=info msg="TearDown network for sandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" successfully" Sep 13 00:53:51.669610 env[1309]: time="2025-09-13T00:53:51.669540468Z" level=info msg="RemovePodSandbox \"4a7cc688a337ff5f506b8e353505ff205a9a8f10668a7195d50dbf2fe2c25a31\" returns successfully" Sep 13 00:53:51.670136 env[1309]: time="2025-09-13T00:53:51.670105003Z" level=info msg="StopPodSandbox for \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\"" Sep 13 00:53:51.670435 env[1309]: time="2025-09-13T00:53:51.670201894Z" level=info msg="TearDown network for sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" successfully" Sep 13 00:53:51.670435 env[1309]: time="2025-09-13T00:53:51.670239297Z" level=info msg="StopPodSandbox for \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" returns successfully" Sep 13 00:53:51.673223 env[1309]: time="2025-09-13T00:53:51.673176654Z" level=info msg="RemovePodSandbox for \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\"" Sep 13 00:53:51.673418 env[1309]: time="2025-09-13T00:53:51.673367644Z" level=info msg="Forcibly stopping sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\"" Sep 13 
00:53:51.673589 env[1309]: time="2025-09-13T00:53:51.673567602Z" level=info msg="TearDown network for sandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" successfully" Sep 13 00:53:51.682640 env[1309]: time="2025-09-13T00:53:51.682591060Z" level=info msg="RemovePodSandbox \"83c64fa77196bab4faa45c93024225095ca75a3a50e8ee7a3f5424cff9d5db57\" returns successfully" Sep 13 00:53:51.737841 systemd-networkd[1062]: lxc_health: Link UP Sep 13 00:53:51.749983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:53:51.748360 systemd-networkd[1062]: lxc_health: Gained carrier Sep 13 00:53:52.384846 kubelet[2066]: E0913 00:53:52.384807 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:52.408267 kubelet[2066]: I0913 00:53:52.408196 2066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s6t9d" podStartSLOduration=8.408177264 podStartE2EDuration="8.408177264s" podCreationTimestamp="2025-09-13 00:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:49.076357988 +0000 UTC m=+117.659210624" watchObservedRunningTime="2025-09-13 00:53:52.408177264 +0000 UTC m=+120.991029901" Sep 13 00:53:52.637476 kubelet[2066]: E0913 00:53:52.637347 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:52.780838 systemd[1]: run-containerd-runc-k8s.io-bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5-runc.jjys5w.mount: Deactivated successfully. 
Sep 13 00:53:52.984953 systemd-networkd[1062]: lxc_health: Gained IPv6LL Sep 13 00:53:53.067648 kubelet[2066]: E0913 00:53:53.067603 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:54.070400 kubelet[2066]: E0913 00:53:54.070358 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:53:55.125682 systemd[1]: run-containerd-runc-k8s.io-bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5-runc.gE3zXP.mount: Deactivated successfully. Sep 13 00:53:57.262270 systemd[1]: run-containerd-runc-k8s.io-bf73980c4818e4d96a2193191441e3e324e253e858824e752b84a74a1ae1d5f5-runc.MCZMoo.mount: Deactivated successfully. Sep 13 00:53:57.330868 sshd[3871]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:57.334943 systemd[1]: sshd@27-143.110.227.187:22-147.75.109.163:56454.service: Deactivated successfully. Sep 13 00:53:57.335974 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:53:57.335980 systemd-logind[1297]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:53:57.337534 systemd-logind[1297]: Removed session 28.