Sep 6 00:15:49.998847 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:15:49.998877 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:15:49.998890 kernel: BIOS-provided physical RAM map:
Sep 6 00:15:49.998898 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 00:15:49.998904 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 00:15:49.998911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 00:15:49.998920 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 6 00:15:49.998930 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 6 00:15:49.998943 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 00:15:49.998954 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 00:15:49.998965 kernel: NX (Execute Disable) protection: active
Sep 6 00:15:49.998976 kernel: SMBIOS 2.8 present.
Sep 6 00:15:49.998987 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 6 00:15:49.998997 kernel: Hypervisor detected: KVM
Sep 6 00:15:49.999011 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:15:49.999025 kernel: kvm-clock: cpu 0, msr 5e19f001, primary cpu clock
Sep 6 00:15:49.999037 kernel: kvm-clock: using sched offset of 3839954934 cycles
Sep 6 00:15:49.999050 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:15:49.999068 kernel: tsc: Detected 1995.307 MHz processor
Sep 6 00:15:49.999082 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:15:49.999093 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:15:49.999105 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 6 00:15:49.999118 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:15:49.999131 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:15:49.999138 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 6 00:15:49.999149 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999156 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999164 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999171 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 6 00:15:49.999179 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999186 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999193 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999203 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:15:49.999211 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 6 00:15:49.999223 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 6 00:15:50.001317 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 6 00:15:50.001338 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 6 00:15:50.001350 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 6 00:15:50.001363 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 6 00:15:50.001374 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 6 00:15:50.001393 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:15:50.001402 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:15:50.001410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 00:15:50.001419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 6 00:15:50.001428 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 6 00:15:50.001436 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 6 00:15:50.001448 kernel: Zone ranges:
Sep 6 00:15:50.001456 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:15:50.001464 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 6 00:15:50.001472 kernel: Normal empty
Sep 6 00:15:50.001481 kernel: Movable zone start for each node
Sep 6 00:15:50.001489 kernel: Early memory node ranges
Sep 6 00:15:50.001497 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 00:15:50.001505 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 6 00:15:50.001513 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 6 00:15:50.001523 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:15:50.001539 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 00:15:50.001547 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 6 00:15:50.001558 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:15:50.001570 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:15:50.001582 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:15:50.001590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:15:50.001598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:15:50.001606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:15:50.001617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:15:50.001630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:15:50.001638 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:15:50.001646 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:15:50.001654 kernel: TSC deadline timer available
Sep 6 00:15:50.001662 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:15:50.001673 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 6 00:15:50.001685 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:15:50.001698 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:15:50.001712 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:15:50.001725 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:15:50.001733 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:15:50.001741 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:15:50.001752 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 6 00:15:50.001765 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 6 00:15:50.001778 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 6 00:15:50.001789 kernel: Policy zone: DMA32
Sep 6 00:15:50.001799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:15:50.001811 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:15:50.001819 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:15:50.001827 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:15:50.001835 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:15:50.001844 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 6 00:15:50.001852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:15:50.001865 kernel: Kernel/User page tables isolation: enabled
Sep 6 00:15:50.001878 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:15:50.001891 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:15:50.001902 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:15:50.001911 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:15:50.001919 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:15:50.001928 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:15:50.001938 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:15:50.001946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:15:50.001955 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:15:50.001963 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 6 00:15:50.001973 kernel: random: crng init done
Sep 6 00:15:50.001981 kernel: Console: colour VGA+ 80x25
Sep 6 00:15:50.001989 kernel: printk: console [tty0] enabled
Sep 6 00:15:50.002018 kernel: printk: console [ttyS0] enabled
Sep 6 00:15:50.002031 kernel: ACPI: Core revision 20210730
Sep 6 00:15:50.002044 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:15:50.002058 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:15:50.002069 kernel: x2apic enabled
Sep 6 00:15:50.002081 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:15:50.002094 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:15:50.002105 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Sep 6 00:15:50.002113 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995307)
Sep 6 00:15:50.002131 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 6 00:15:50.002139 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 6 00:15:50.002147 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:15:50.002159 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:15:50.002171 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:15:50.002179 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 6 00:15:50.002192 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:15:50.002213 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:15:50.002221 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 00:15:50.002253 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:15:50.002266 kernel: active return thunk: its_return_thunk
Sep 6 00:15:50.002275 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:15:50.002283 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:15:50.002292 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:15:50.002300 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:15:50.002309 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:15:50.002327 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:15:50.002337 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:15:50.002345 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:15:50.002358 kernel: LSM: Security Framework initializing
Sep 6 00:15:50.002368 kernel: SELinux: Initializing.
Sep 6 00:15:50.002377 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:15:50.002386 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:15:50.002397 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 6 00:15:50.002405 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 6 00:15:50.002414 kernel: signal: max sigframe size: 1776
Sep 6 00:15:50.002423 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:15:50.002431 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:15:50.002439 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:15:50.002448 kernel: x86: Booting SMP configuration:
Sep 6 00:15:50.002456 kernel: .... node #0, CPUs: #1
Sep 6 00:15:50.002464 kernel: kvm-clock: cpu 1, msr 5e19f041, secondary cpu clock
Sep 6 00:15:50.002475 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 6 00:15:50.002483 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:15:50.002492 kernel: smpboot: Max logical packages: 1
Sep 6 00:15:50.002502 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS)
Sep 6 00:15:50.002514 kernel: devtmpfs: initialized
Sep 6 00:15:50.002522 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:15:50.002531 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:15:50.002539 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:15:50.002548 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:15:50.002559 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:15:50.002567 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:15:50.002580 kernel: audit: type=2000 audit(1757117749.245:1): state=initialized audit_enabled=0 res=1
Sep 6 00:15:50.002594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:15:50.002603 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:15:50.002611 kernel: cpuidle: using governor menu
Sep 6 00:15:50.002626 kernel: ACPI: bus type PCI registered
Sep 6 00:15:50.002640 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:15:50.002649 kernel: dca service started, version 1.12.1
Sep 6 00:15:50.002660 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:15:50.002669 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:15:50.002678 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:15:50.002686 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:15:50.002694 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:15:50.002703 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:15:50.002711 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:15:50.002719 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:15:50.002728 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:15:50.002738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:15:50.002747 kernel: ACPI: Interpreter enabled
Sep 6 00:15:50.002756 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:15:50.002764 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:15:50.002773 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:15:50.002781 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 6 00:15:50.002790 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:15:50.003031 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:15:50.003136 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 6 00:15:50.003149 kernel: acpiphp: Slot [3] registered
Sep 6 00:15:50.003157 kernel: acpiphp: Slot [4] registered
Sep 6 00:15:50.003166 kernel: acpiphp: Slot [5] registered
Sep 6 00:15:50.003174 kernel: acpiphp: Slot [6] registered
Sep 6 00:15:50.003183 kernel: acpiphp: Slot [7] registered
Sep 6 00:15:50.003191 kernel: acpiphp: Slot [8] registered
Sep 6 00:15:50.003199 kernel: acpiphp: Slot [9] registered
Sep 6 00:15:50.003207 kernel: acpiphp: Slot [10] registered
Sep 6 00:15:50.003219 kernel: acpiphp: Slot [11] registered
Sep 6 00:15:50.003227 kernel: acpiphp: Slot [12] registered
Sep 6 00:15:50.005279 kernel: acpiphp: Slot [13] registered
Sep 6 00:15:50.005307 kernel: acpiphp: Slot [14] registered
Sep 6 00:15:50.005316 kernel: acpiphp: Slot [15] registered
Sep 6 00:15:50.005325 kernel: acpiphp: Slot [16] registered
Sep 6 00:15:50.005333 kernel: acpiphp: Slot [17] registered
Sep 6 00:15:50.005342 kernel: acpiphp: Slot [18] registered
Sep 6 00:15:50.005350 kernel: acpiphp: Slot [19] registered
Sep 6 00:15:50.005364 kernel: acpiphp: Slot [20] registered
Sep 6 00:15:50.005372 kernel: acpiphp: Slot [21] registered
Sep 6 00:15:50.005380 kernel: acpiphp: Slot [22] registered
Sep 6 00:15:50.005389 kernel: acpiphp: Slot [23] registered
Sep 6 00:15:50.005397 kernel: acpiphp: Slot [24] registered
Sep 6 00:15:50.005405 kernel: acpiphp: Slot [25] registered
Sep 6 00:15:50.005414 kernel: acpiphp: Slot [26] registered
Sep 6 00:15:50.005422 kernel: acpiphp: Slot [27] registered
Sep 6 00:15:50.005430 kernel: acpiphp: Slot [28] registered
Sep 6 00:15:50.005450 kernel: acpiphp: Slot [29] registered
Sep 6 00:15:50.005473 kernel: acpiphp: Slot [30] registered
Sep 6 00:15:50.005482 kernel: acpiphp: Slot [31] registered
Sep 6 00:15:50.005491 kernel: PCI host bridge to bus 0000:00
Sep 6 00:15:50.005683 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:15:50.010485 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:15:50.010618 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:15:50.010724 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 6 00:15:50.010823 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 6 00:15:50.010920 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:15:50.011066 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 6 00:15:50.011192 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 6 00:15:50.011427 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 6 00:15:50.011562 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 6 00:15:50.011696 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 6 00:15:50.011798 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 6 00:15:50.011898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 6 00:15:50.011993 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 6 00:15:50.012139 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 6 00:15:50.012273 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 6 00:15:50.012399 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 6 00:15:50.012518 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 6 00:15:50.012622 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 6 00:15:50.012768 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 6 00:15:50.012915 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 6 00:15:50.013068 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 6 00:15:50.013201 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 6 00:15:50.013321 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 6 00:15:50.013419 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:15:50.013534 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:15:50.013633 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 6 00:15:50.013747 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 6 00:15:50.013868 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 6 00:15:50.013986 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:15:50.014183 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 6 00:15:50.014315 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 6 00:15:50.014441 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 6 00:15:50.014595 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 6 00:15:50.014712 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 6 00:15:50.014814 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 6 00:15:50.014935 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 6 00:15:50.015059 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:15:50.015159 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 00:15:50.015322 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 6 00:15:50.015429 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 6 00:15:50.015569 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:15:50.015702 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 6 00:15:50.015821 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 6 00:15:50.015953 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 6 00:15:50.016102 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 6 00:15:50.016214 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 6 00:15:50.017434 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 6 00:15:50.017462 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:15:50.017476 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:15:50.017491 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:15:50.017511 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:15:50.017520 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 6 00:15:50.017530 kernel: iommu: Default domain type: Translated
Sep 6 00:15:50.017538 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:15:50.017642 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 6 00:15:50.017774 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:15:50.017899 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 6 00:15:50.017912 kernel: vgaarb: loaded
Sep 6 00:15:50.017921 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:15:50.017934 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:15:50.017943 kernel: PTP clock support registered
Sep 6 00:15:50.017952 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:15:50.017960 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:15:50.017969 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 00:15:50.017982 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 6 00:15:50.018073 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:15:50.018087 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:15:50.018100 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:15:50.018112 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:15:50.018121 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:15:50.018134 kernel: pnp: PnP ACPI init
Sep 6 00:15:50.018147 kernel: pnp: PnP ACPI: found 4 devices
Sep 6 00:15:50.018158 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:15:50.018172 kernel: NET: Registered PF_INET protocol family
Sep 6 00:15:50.018183 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:15:50.018192 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 6 00:15:50.018203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:15:50.018213 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:15:50.018223 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 6 00:15:50.021285 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 6 00:15:50.021309 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:15:50.021318 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:15:50.021327 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:15:50.021336 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:15:50.021547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:15:50.021673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:15:50.021773 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:15:50.021884 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 6 00:15:50.021979 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 6 00:15:50.022123 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 6 00:15:50.022284 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 6 00:15:50.022432 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 6 00:15:50.022453 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 6 00:15:50.022609 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 40457 usecs
Sep 6 00:15:50.022628 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:15:50.022641 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:15:50.022655 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Sep 6 00:15:50.022668 kernel: Initialise system trusted keyrings
Sep 6 00:15:50.022683 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 6 00:15:50.022696 kernel: Key type asymmetric registered
Sep 6 00:15:50.022710 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:15:50.022723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:15:50.022743 kernel: io scheduler mq-deadline registered
Sep 6 00:15:50.022756 kernel: io scheduler kyber registered
Sep 6 00:15:50.022769 kernel: io scheduler bfq registered
Sep 6 00:15:50.022782 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:15:50.022796 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 6 00:15:50.022809 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 6 00:15:50.022819 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 6 00:15:50.022828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:15:50.022837 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:15:50.022851 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:15:50.022864 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:15:50.022877 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:15:50.022890 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:15:50.023123 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 6 00:15:50.023323 kernel: rtc_cmos 00:03: registered as rtc0
Sep 6 00:15:50.023478 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T00:15:49 UTC (1757117749)
Sep 6 00:15:50.023611 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 6 00:15:50.023624 kernel: intel_pstate: CPU model not supported
Sep 6 00:15:50.023633 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:15:50.023641 kernel: Segment Routing with IPv6
Sep 6 00:15:50.023650 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:15:50.023659 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:15:50.023667 kernel: Key type dns_resolver registered
Sep 6 00:15:50.023677 kernel: IPI shorthand broadcast: enabled
Sep 6 00:15:50.023685 kernel: sched_clock: Marking stable (759724882, 135540398)->(1055474159, -160208879)
Sep 6 00:15:50.023694 kernel: registered taskstats version 1
Sep 6 00:15:50.023706 kernel: Loading compiled-in X.509 certificates
Sep 6 00:15:50.023715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:15:50.023723 kernel: Key type .fscrypt registered
Sep 6 00:15:50.023732 kernel: Key type fscrypt-provisioning registered
Sep 6 00:15:50.023744 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:15:50.023756 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:15:50.023767 kernel: ima: No architecture policies found
Sep 6 00:15:50.023779 kernel: clk: Disabling unused clocks
Sep 6 00:15:50.023794 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:15:50.023805 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:15:50.023819 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:15:50.023831 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:15:50.023841 kernel: Run /init as init process
Sep 6 00:15:50.023855 kernel: with arguments:
Sep 6 00:15:50.023891 kernel: /init
Sep 6 00:15:50.023908 kernel: with environment:
Sep 6 00:15:50.023920 kernel: HOME=/
Sep 6 00:15:50.023931 kernel: TERM=linux
Sep 6 00:15:50.023940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:15:50.023952 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:15:50.023965 systemd[1]: Detected virtualization kvm.
Sep 6 00:15:50.023974 systemd[1]: Detected architecture x86-64.
Sep 6 00:15:50.023983 systemd[1]: Running in initrd.
Sep 6 00:15:50.023993 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:15:50.024002 systemd[1]: Hostname set to .
Sep 6 00:15:50.024014 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:15:50.024023 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:15:50.024032 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:15:50.024041 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:15:50.024050 systemd[1]: Reached target paths.target.
Sep 6 00:15:50.024059 systemd[1]: Reached target slices.target.
Sep 6 00:15:50.024069 systemd[1]: Reached target swap.target.
Sep 6 00:15:50.024078 systemd[1]: Reached target timers.target.
Sep 6 00:15:50.024090 systemd[1]: Listening on iscsid.socket.
Sep 6 00:15:50.024099 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:15:50.024110 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:15:50.024120 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:15:50.024129 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:15:50.024138 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:15:50.024147 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:15:50.024157 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:15:50.024168 systemd[1]: Reached target sockets.target.
Sep 6 00:15:50.024178 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:15:50.024190 systemd[1]: Finished network-cleanup.service.
Sep 6 00:15:50.024199 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:15:50.024208 systemd[1]: Starting systemd-journald.service...
Sep 6 00:15:50.024220 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:15:50.024229 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:15:50.025316 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:15:50.025327 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:15:50.025337 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:15:50.025354 systemd-journald[184]: Journal started
Sep 6 00:15:50.025419 systemd-journald[184]: Runtime Journal (/run/log/journal/69dd0b7dfaef476fa260189c6464e47f) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:15:50.000456 systemd-modules-load[185]: Inserted module 'overlay'
Sep 6 00:15:50.066279 systemd[1]: Started systemd-journald.service.
Sep 6 00:15:50.066321 kernel: audit: type=1130 audit(1757117750.054:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.066344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:15:50.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.031844 systemd-resolved[186]: Positive Trust Anchors:
Sep 6 00:15:50.071201 kernel: audit: type=1130 audit(1757117750.066:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.031858 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:15:50.031913 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:15:50.077712 kernel: Bridge firewalling registered
Sep 6 00:15:50.035707 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 6 00:15:50.092340 kernel: audit: type=1130 audit(1757117750.078:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.092377 kernel: audit: type=1130 audit(1757117750.083:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.067154 systemd[1]: Started systemd-resolved.service.
Sep 6 00:15:50.075675 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 6 00:15:50.078751 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:15:50.088547 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:15:50.090455 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:15:50.092938 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:15:50.117318 kernel: SCSI subsystem initialized
Sep 6 00:15:50.117394 kernel: audit: type=1130 audit(1757117750.111:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:50.110676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:15:50.126446 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:15:50.130091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:15:50.130130 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:15:50.130158 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:15:50.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.135289 kernel: audit: type=1130 audit(1757117750.130:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.136069 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:15:50.138356 systemd-modules-load[185]: Inserted module 'dm_multipath' Sep 6 00:15:50.139598 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:15:50.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.155370 kernel: audit: type=1130 audit(1757117750.140:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.155758 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:15:50.171058 dracut-cmdline[204]: dracut-dracut-053 Sep 6 00:15:50.170228 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:15:50.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:15:50.176440 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:15:50.178816 kernel: audit: type=1130 audit(1757117750.170:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.262271 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:15:50.287273 kernel: iscsi: registered transport (tcp) Sep 6 00:15:50.316267 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:15:50.316336 kernel: QLogic iSCSI HBA Driver Sep 6 00:15:50.366573 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:15:50.368072 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:15:50.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.374317 kernel: audit: type=1130 audit(1757117750.366:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:15:50.432323 kernel: raid6: avx2x4 gen() 27484 MB/s Sep 6 00:15:50.449316 kernel: raid6: avx2x4 xor() 9464 MB/s Sep 6 00:15:50.467308 kernel: raid6: avx2x2 gen() 21892 MB/s Sep 6 00:15:50.485314 kernel: raid6: avx2x2 xor() 10102 MB/s Sep 6 00:15:50.503308 kernel: raid6: avx2x1 gen() 13964 MB/s Sep 6 00:15:50.521323 kernel: raid6: avx2x1 xor() 9000 MB/s Sep 6 00:15:50.539304 kernel: raid6: sse2x4 gen() 8238 MB/s Sep 6 00:15:50.557314 kernel: raid6: sse2x4 xor() 4138 MB/s Sep 6 00:15:50.575321 kernel: raid6: sse2x2 gen() 8131 MB/s Sep 6 00:15:50.593315 kernel: raid6: sse2x2 xor() 5340 MB/s Sep 6 00:15:50.610315 kernel: raid6: sse2x1 gen() 6341 MB/s Sep 6 00:15:50.628871 kernel: raid6: sse2x1 xor() 4204 MB/s Sep 6 00:15:50.628948 kernel: raid6: using algorithm avx2x4 gen() 27484 MB/s Sep 6 00:15:50.628961 kernel: raid6: .... xor() 9464 MB/s, rmw enabled Sep 6 00:15:50.630141 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:15:50.652298 kernel: xor: automatically using best checksumming function avx Sep 6 00:15:50.782312 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:15:50.796424 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:15:50.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.797000 audit: BPF prog-id=7 op=LOAD Sep 6 00:15:50.797000 audit: BPF prog-id=8 op=LOAD Sep 6 00:15:50.798735 systemd[1]: Starting systemd-udevd.service... Sep 6 00:15:50.820121 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 6 00:15:50.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.827484 systemd[1]: Started systemd-udevd.service. 
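Editor's note: the raid6 lines above are the kernel benchmarking each available syndrome-generation implementation and keeping the fastest (here avx2x4 at 27484 MB/s; the real selection logic lives in the kernel's lib/raid6/algos.c). A minimal sketch of that pick over the throughputs logged above:

```python
# gen() throughputs in MB/s, copied from the raid6 benchmark lines above.
gen_mbps = {
    "avx2x4": 27484,
    "avx2x2": 21892,
    "avx2x1": 13964,
    "sse2x4": 8238,
    "sse2x2": 8131,
    "sse2x1": 6341,
}

def pick_fastest(results):
    """Return the algorithm name with the highest gen() throughput,
    mirroring how the kernel chooses among its benchmarked variants."""
    return max(results, key=results.get)

print(pick_fastest(gen_mbps))  # avx2x4 -> "raid6: using algorithm avx2x4 gen()"
```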
Sep 6 00:15:50.831060 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:15:50.850018 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation Sep 6 00:15:50.893024 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:15:50.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:50.895558 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:15:50.953755 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:15:50.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:51.034424 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 6 00:15:51.093332 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:15:51.093363 kernel: GPT:9289727 != 125829119 Sep 6 00:15:51.093375 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:15:51.093387 kernel: GPT:9289727 != 125829119 Sep 6 00:15:51.093399 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:15:51.093409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:15:51.093422 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:15:51.093433 kernel: scsi host0: Virtio SCSI HBA Sep 6 00:15:51.097326 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Sep 6 00:15:51.139330 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:15:51.144264 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (435) Sep 6 00:15:51.148603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:15:51.152140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
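Editor's note: the GPT warnings above are what you typically see after a virtual disk has been enlarged (here to 125829120 sectors) without relocating the backup GPT header, which still sits at LBA 9289727 instead of the disk's last LBA; tools such as `sgdisk -e` or GNU Parted (as the kernel suggests) can move it. A quick sanity check of the logged numbers:

```python
# Numbers taken from the virtio_blk and GPT lines above.
total_sectors = 125_829_120   # [vda] 125829120 512-byte logical blocks
alt_header_lba = 9_289_727    # where the backup GPT header actually is

# The backup (alternate) GPT header belongs in the last logical block.
expected_alt_lba = total_sectors - 1

print(expected_alt_lba)                    # 125829119
print(alt_header_lba == expected_alt_lba)  # False -> "GPT:9289727 != 125829119"
```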
Sep 6 00:15:51.155323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:15:51.157636 systemd[1]: Starting disk-uuid.service... Sep 6 00:15:51.164526 disk-uuid[455]: Primary Header is updated. Sep 6 00:15:51.164526 disk-uuid[455]: Secondary Entries is updated. Sep 6 00:15:51.164526 disk-uuid[455]: Secondary Header is updated. Sep 6 00:15:51.173615 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:15:51.173679 kernel: AES CTR mode by8 optimization enabled Sep 6 00:15:51.179285 kernel: libata version 3.00 loaded. Sep 6 00:15:51.181974 kernel: ACPI: bus type USB registered Sep 6 00:15:51.182054 kernel: usbcore: registered new interface driver usbfs Sep 6 00:15:51.184726 kernel: usbcore: registered new interface driver hub Sep 6 00:15:51.184829 kernel: usbcore: registered new device driver usb Sep 6 00:15:51.188012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:15:51.200541 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 6 00:15:51.229268 kernel: ehci-pci: EHCI PCI platform driver Sep 6 00:15:51.235662 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 6 00:15:51.250630 kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 6 00:15:51.250663 kernel: scsi host1: ata_piix Sep 6 00:15:51.250946 kernel: scsi host2: ata_piix Sep 6 00:15:51.251264 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 6 00:15:51.251306 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 6 00:15:51.266267 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 6 00:15:51.267761 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 6 00:15:51.267889 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 6 00:15:51.267994 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Sep 6 00:15:51.268096 kernel: hub 1-0:1.0: USB hub found Sep 6 00:15:51.268270 kernel: hub 1-0:1.0: 2 ports detected Sep 6 
00:15:52.175265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:15:52.175754 disk-uuid[456]: The operation has completed successfully. Sep 6 00:15:52.181302 kernel: block device autoloading is deprecated and will be removed. Sep 6 00:15:52.237050 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:15:52.238216 systemd[1]: Finished disk-uuid.service. Sep 6 00:15:52.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.240944 systemd[1]: Starting verity-setup.service... Sep 6 00:15:52.265385 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:15:52.325024 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:15:52.327333 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:15:52.329041 systemd[1]: Finished verity-setup.service. Sep 6 00:15:52.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.437288 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:15:52.437967 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:15:52.439531 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:15:52.441494 systemd[1]: Starting ignition-setup.service... Sep 6 00:15:52.443181 systemd[1]: Starting parse-ip-for-networkd.service... 
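Editor's note: verity-setup above brings up /dev/mapper/usr using the `verity.usrhash=` value from the kernel command line; dm-verity authenticates every block of the read-only /usr partition against a hash tree whose root digest is that pinned value. An illustrative toy hash tree in Python (the idea only; the real dm-verity on-disk format and parameters differ):

```python
import hashlib

def root_hash(data, block_size=4096):
    """Toy Merkle-style root: hash each fixed-size block, then hash the
    concatenation of the block digests. Illustrates why one pinned root
    digest authenticates the whole device."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaves).hexdigest()

good = bytes(16384)        # four zero-filled blocks
bad = bytearray(good)
bad[5000] ^= 0xFF          # flip one bit in the second block

# Any single-block change alters the root, so a tampered /usr would fail
# verification against the command-line root hash.
print(root_hash(good) == root_hash(bytes(bad)))  # False
```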
Sep 6 00:15:52.461292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:15:52.461375 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:15:52.461407 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:15:52.479696 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:15:52.487373 systemd[1]: Finished ignition-setup.service. Sep 6 00:15:52.488931 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:15:52.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.592147 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:15:52.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.594000 audit: BPF prog-id=9 op=LOAD Sep 6 00:15:52.597437 systemd[1]: Starting systemd-networkd.service... Sep 6 00:15:52.625930 systemd-networkd[689]: lo: Link UP Sep 6 00:15:52.627184 systemd-networkd[689]: lo: Gained carrier Sep 6 00:15:52.628825 systemd-networkd[689]: Enumeration completed Sep 6 00:15:52.629811 systemd[1]: Started systemd-networkd.service. Sep 6 00:15:52.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.630366 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:15:52.631365 systemd[1]: Reached target network.target. Sep 6 00:15:52.632465 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Sep 6 00:15:52.633744 systemd-networkd[689]: eth1: Link UP Sep 6 00:15:52.633749 systemd-networkd[689]: eth1: Gained carrier Sep 6 00:15:52.636630 systemd[1]: Starting iscsiuio.service... Sep 6 00:15:52.653762 systemd-networkd[689]: eth0: Link UP Sep 6 00:15:52.653778 systemd-networkd[689]: eth0: Gained carrier Sep 6 00:15:52.670371 systemd[1]: Started iscsiuio.service. Sep 6 00:15:52.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.675757 ignition[613]: Ignition 2.14.0 Sep 6 00:15:52.672569 systemd[1]: Starting iscsid.service... Sep 6 00:15:52.675770 ignition[613]: Stage: fetch-offline Sep 6 00:15:52.674438 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253 Sep 6 00:15:52.675879 ignition[613]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:15:52.678509 systemd-networkd[689]: eth0: DHCPv4 address 146.190.126.13/20, gateway 146.190.112.1 acquired from 169.254.169.253 Sep 6 00:15:52.686766 iscsid[694]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:15:52.686766 iscsid[694]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:15:52.686766 iscsid[694]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:15:52.686766 iscsid[694]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 6 00:15:52.686766 iscsid[694]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:15:52.686766 iscsid[694]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:15:52.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.675910 ignition[613]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:15:52.685747 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:15:52.683450 ignition[613]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:15:52.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.697286 systemd[1]: Started iscsid.service. Sep 6 00:15:52.683698 ignition[613]: parsed url from cmdline: "" Sep 6 00:15:52.698874 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:15:52.683706 ignition[613]: no config URL provided Sep 6 00:15:52.702905 systemd[1]: Starting ignition-fetch.service... Sep 6 00:15:52.683714 ignition[613]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:15:52.716127 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:15:52.683726 ignition[613]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:15:52.717758 systemd[1]: Reached target remote-fs-pre.target. 
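Editor's note: the eth0 lease logged above (146.190.126.13/20 with gateway 146.190.112.1) is internally consistent: under a /20 mask the address falls in 146.190.112.0/20, and the gateway is that network's first host. Checked with the standard library:

```python
import ipaddress

# Values from the systemd-networkd DHCPv4 line above.
iface = ipaddress.ip_interface("146.190.126.13/20")
gateway = ipaddress.ip_address("146.190.112.1")

print(iface.network)                                  # 146.190.112.0/20
print(gateway in iface.network)                       # True
print(gateway == iface.network.network_address + 1)   # gateway is the first host
```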
Sep 6 00:15:52.683733 ignition[613]: failed to fetch config: resource requires networking Sep 6 00:15:52.719419 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:15:52.684490 ignition[613]: Ignition finished successfully Sep 6 00:15:52.720788 systemd[1]: Reached target remote-fs.target. Sep 6 00:15:52.729124 ignition[696]: Ignition 2.14.0 Sep 6 00:15:52.723204 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:15:52.729133 ignition[696]: Stage: fetch Sep 6 00:15:52.729363 ignition[696]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:15:52.729387 ignition[696]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:15:52.732387 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:15:52.732626 ignition[696]: parsed url from cmdline: "" Sep 6 00:15:52.732634 ignition[696]: no config URL provided Sep 6 00:15:52.732644 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:15:52.732659 ignition[696]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:15:52.732708 ignition[696]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 6 00:15:52.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.748545 systemd[1]: Finished dracut-pre-mount.service. 
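Editor's note: the fetch stage above retrieves user data from the DigitalOcean metadata endpoint (GET http://169.254.169.254/metadata/v1/user-data) and parses it with Ignition 2.14.0. A hypothetical minimal Ignition v2 user config of the kind that stage consumes — the file path and contents here are illustrative examples, not the actual user data fetched on this boot — might look like:

```json
{
  "ignition": { "version": "2.3.0" },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/home/core/install.sh",
        "mode": 493,
        "contents": { "source": "data:,echo%20hello%0A" }
      }
    ]
  }
}
```

Files declared this way are written to /sysroot during the later `files` stage, which is where lines like "writing file /sysroot/home/core/install.sh" further down come from.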
Sep 6 00:15:52.751819 ignition[696]: GET result: OK Sep 6 00:15:52.751954 ignition[696]: parsing config with SHA512: bb4243d1047348c3a17a13df1cc25ca3fb3d1095676434454aa7da0062c121ea3a3b355eec37b6c033cb13910916d7b60679c3d23f2f3bf9f2ce2a93a386e7b6 Sep 6 00:15:52.765432 unknown[696]: fetched base config from "system" Sep 6 00:15:52.766770 unknown[696]: fetched base config from "system" Sep 6 00:15:52.767691 unknown[696]: fetched user config from "digitalocean" Sep 6 00:15:52.768860 ignition[696]: fetch: fetch complete Sep 6 00:15:52.768875 ignition[696]: fetch: fetch passed Sep 6 00:15:52.768963 ignition[696]: Ignition finished successfully Sep 6 00:15:52.770489 systemd[1]: Finished ignition-fetch.service. Sep 6 00:15:52.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.772508 systemd[1]: Starting ignition-kargs.service... Sep 6 00:15:52.792867 ignition[714]: Ignition 2.14.0 Sep 6 00:15:52.793829 ignition[714]: Stage: kargs Sep 6 00:15:52.794646 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:15:52.795495 ignition[714]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:15:52.798495 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:15:52.800996 ignition[714]: kargs: kargs passed Sep 6 00:15:52.801092 ignition[714]: Ignition finished successfully Sep 6 00:15:52.802573 systemd[1]: Finished ignition-kargs.service. Sep 6 00:15:52.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.804354 systemd[1]: Starting ignition-disks.service... 
Sep 6 00:15:52.824491 ignition[720]: Ignition 2.14.0 Sep 6 00:15:52.824507 ignition[720]: Stage: disks Sep 6 00:15:52.824684 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:15:52.824706 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:15:52.826495 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:15:52.827571 ignition[720]: disks: disks passed Sep 6 00:15:52.828996 systemd[1]: Finished ignition-disks.service. Sep 6 00:15:52.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.827623 ignition[720]: Ignition finished successfully Sep 6 00:15:52.829758 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:15:52.830691 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:15:52.831744 systemd[1]: Reached target local-fs.target. Sep 6 00:15:52.832868 systemd[1]: Reached target sysinit.target. Sep 6 00:15:52.833870 systemd[1]: Reached target basic.target. Sep 6 00:15:52.836029 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:15:52.855881 systemd-fsck[727]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:15:52.859969 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:15:52.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:52.862578 systemd[1]: Mounting sysroot.mount... Sep 6 00:15:52.880262 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:15:52.881671 systemd[1]: Mounted sysroot.mount. 
Sep 6 00:15:52.882704 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:15:52.886066 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:15:52.889029 systemd[1]: Starting flatcar-digitalocean-network.service... Sep 6 00:15:52.892176 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 00:15:52.893765 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:15:52.893836 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:15:52.900307 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:15:52.903802 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:15:52.916328 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:15:52.933439 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:15:52.949781 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:15:52.965926 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:15:53.052530 coreos-metadata[733]: Sep 06 00:15:53.052 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:15:53.060663 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:15:53.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.062515 systemd[1]: Starting ignition-mount.service... Sep 6 00:15:53.064121 systemd[1]: Starting sysroot-boot.service... Sep 6 00:15:53.072325 coreos-metadata[733]: Sep 06 00:15:53.070 INFO Fetch successful Sep 6 00:15:53.079023 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 6 00:15:53.079147 systemd[1]: Finished flatcar-digitalocean-network.service. 
Sep 6 00:15:53.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.085221 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:15:53.102262 coreos-metadata[734]: Sep 06 00:15:53.102 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:15:53.110516 ignition[786]: INFO : Ignition 2.14.0 Sep 6 00:15:53.111969 ignition[786]: INFO : Stage: mount Sep 6 00:15:53.112875 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:15:53.113803 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:15:53.117769 coreos-metadata[734]: Sep 06 00:15:53.116 INFO Fetch successful Sep 6 00:15:53.117782 systemd[1]: Finished sysroot-boot.service. Sep 6 00:15:53.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.123525 coreos-metadata[734]: Sep 06 00:15:53.123 INFO wrote hostname ci-3510.3.8-n-27671cbf1d to /sysroot/etc/hostname Sep 6 00:15:53.124535 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:15:53.125281 ignition[786]: INFO : mount: mount passed Sep 6 00:15:53.125281 ignition[786]: INFO : Ignition finished successfully Sep 6 00:15:53.126948 systemd[1]: Finished flatcar-metadata-hostname.service. 
Sep 6 00:15:53.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.129333 systemd[1]: Finished ignition-mount.service. Sep 6 00:15:53.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:53.349075 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:15:53.359303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (793) Sep 6 00:15:53.362864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:15:53.362972 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:15:53.362993 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:15:53.370561 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:15:53.380998 systemd[1]: Starting ignition-files.service... 
Sep 6 00:15:53.404368 ignition[813]: INFO : Ignition 2.14.0
Sep 6 00:15:53.404368 ignition[813]: INFO : Stage: files
Sep 6 00:15:53.406507 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:15:53.406507 ignition[813]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:15:53.409198 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:15:53.410268 ignition[813]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:15:53.411186 ignition[813]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:15:53.411186 ignition[813]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:15:53.414823 ignition[813]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:15:53.415805 ignition[813]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:15:53.417706 unknown[813]: wrote ssh authorized keys file for user: core
Sep 6 00:15:53.418772 ignition[813]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:15:53.419689 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 6 00:15:53.419689 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 6 00:15:53.568015 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:15:53.744435 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 6 00:15:53.745904 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:15:53.746997 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:15:53.818481 systemd-networkd[689]: eth0: Gained IPv6LL
Sep 6 00:15:53.905827 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:15:54.446583 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 00:15:54.448383 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 6 00:15:54.458526 systemd-networkd[689]: eth1: Gained IPv6LL
Sep 6 00:15:54.889282 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 6 00:15:57.306344 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 6 00:15:57.308107 ignition[813]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:15:57.308107 ignition[813]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:15:57.308107 ignition[813]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Sep 6 00:15:57.308107 ignition[813]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:15:57.313475 ignition[813]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:15:57.333540 kernel: kauditd_printk_skb: 28 callbacks suppressed
Sep 6 00:15:57.334583 kernel: audit: type=1130 audit(1757117757.319:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.317160 systemd[1]: Finished ignition-files.service.
Sep 6 00:15:57.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.341328 kernel: audit: type=1130 audit(1757117757.336:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.341356 ignition[813]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:15:57.341356 ignition[813]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:15:57.341356 ignition[813]: INFO : files: files passed
Sep 6 00:15:57.341356 ignition[813]: INFO : Ignition finished successfully
Sep 6 00:15:57.354200 kernel: audit: type=1131 audit(1757117757.341:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.354272 kernel: audit: type=1130 audit(1757117757.346:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.321708 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:15:57.328283 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:15:57.356744 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:15:57.329848 systemd[1]: Starting ignition-quench.service...
Sep 6 00:15:57.335792 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:15:57.335945 systemd[1]: Finished ignition-quench.service.
Sep 6 00:15:57.341556 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:15:57.347180 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:15:57.354647 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:15:57.378147 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:15:57.379336 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:15:57.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.381520 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:15:57.390010 kernel: audit: type=1130 audit(1757117757.380:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.390048 kernel: audit: type=1131 audit(1757117757.380:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.389346 systemd[1]: Reached target initrd.target.
Sep 6 00:15:57.390612 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:15:57.391886 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:15:57.408652 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:15:57.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.416272 kernel: audit: type=1130 audit(1757117757.409:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.415168 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:15:57.428743 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:15:57.430548 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:15:57.432203 systemd[1]: Stopped target timers.target.
Sep 6 00:15:57.432798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:15:57.448312 kernel: audit: type=1131 audit(1757117757.433:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.432927 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:15:57.434418 systemd[1]: Stopped target initrd.target.
Sep 6 00:15:57.448856 systemd[1]: Stopped target basic.target.
Sep 6 00:15:57.450128 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:15:57.451140 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:15:57.452214 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:15:57.453364 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:15:57.454657 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:15:57.455809 systemd[1]: Stopped target sysinit.target.
Sep 6 00:15:57.456952 systemd[1]: Stopped target local-fs.target.
Sep 6 00:15:57.458087 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:15:57.459185 systemd[1]: Stopped target swap.target.
Sep 6 00:15:57.466473 kernel: audit: type=1131 audit(1757117757.460:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.460146 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:15:57.460348 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:15:57.473340 kernel: audit: type=1131 audit(1757117757.467:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.461316 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:15:57.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.467114 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:15:57.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.467464 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:15:57.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.468383 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:15:57.468666 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:15:57.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.474170 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:15:57.474351 systemd[1]: Stopped ignition-files.service.
Sep 6 00:15:57.475089 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 6 00:15:57.475274 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 6 00:15:57.477408 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:15:57.479570 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:15:57.480128 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:15:57.480357 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:15:57.481131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:15:57.481272 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:15:57.484842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:15:57.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.496353 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:15:57.506269 ignition[851]: INFO : Ignition 2.14.0
Sep 6 00:15:57.506269 ignition[851]: INFO : Stage: umount
Sep 6 00:15:57.506269 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:15:57.506269 ignition[851]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:15:57.516737 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:15:57.516737 ignition[851]: INFO : umount: umount passed
Sep 6 00:15:57.516737 ignition[851]: INFO : Ignition finished successfully
Sep 6 00:15:57.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.509723 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:15:57.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.514998 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:15:57.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.515098 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:15:57.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.519441 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:15:57.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.519572 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:15:57.520409 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:15:57.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.520470 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:15:57.521535 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:15:57.521597 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:15:57.522712 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:15:57.522756 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:15:57.523754 systemd[1]: Stopped target network.target.
Sep 6 00:15:57.525076 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:15:57.525149 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:15:57.526323 systemd[1]: Stopped target paths.target.
Sep 6 00:15:57.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.527650 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:15:57.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.529590 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:15:57.530597 systemd[1]: Stopped target slices.target.
Sep 6 00:15:57.531915 systemd[1]: Stopped target sockets.target.
Sep 6 00:15:57.533181 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:15:57.533277 systemd[1]: Closed iscsid.socket.
Sep 6 00:15:57.534410 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:15:57.534456 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:15:57.535734 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:15:57.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.535829 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:15:57.536914 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:15:57.536962 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:15:57.538265 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:15:57.550000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:15:57.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.539842 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:15:57.544336 systemd-networkd[689]: eth0: DHCPv6 lease lost
Sep 6 00:15:57.546568 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:15:57.546705 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:15:57.548347 systemd-networkd[689]: eth1: DHCPv6 lease lost
Sep 6 00:15:57.549760 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:15:57.549900 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:15:57.556000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:15:57.551710 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:15:57.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.551768 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:15:57.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.554757 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:15:57.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.555465 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:15:57.555574 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:15:57.559403 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:15:57.559467 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:15:57.560882 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:15:57.560939 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:15:57.561910 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:15:57.564683 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:15:57.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.567737 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:15:57.567973 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:15:57.569128 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:15:57.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.569185 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:15:57.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.570067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:15:57.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.570109 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:15:57.573750 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:15:57.573818 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:15:57.575051 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:15:57.575111 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:15:57.576504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:15:57.576576 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:15:57.578972 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:15:57.586896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 6 00:15:57.586989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 6 00:15:57.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.589516 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:15:57.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.589597 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:15:57.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.590420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:15:57.590473 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:15:57.598837 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 6 00:15:57.599516 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:15:57.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.599645 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:15:57.601200 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:15:57.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:15:57.601371 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:15:57.602831 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:15:57.605013 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:15:57.619736 systemd[1]: Switching root.
Sep 6 00:15:57.649388 iscsid[694]: iscsid shutting down.
Sep 6 00:15:57.650278 systemd-journald[184]: Received SIGTERM from PID 1 (n/a).
Sep 6 00:15:57.650357 systemd-journald[184]: Journal stopped
Sep 6 00:16:02.493573 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:16:02.493661 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:16:02.493683 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:16:02.493700 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:16:02.493717 kernel: SELinux: policy capability open_perms=1
Sep 6 00:16:02.493736 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:16:02.493755 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:16:02.493783 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:16:02.493796 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:16:02.493809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:16:02.493821 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:16:02.493838 systemd[1]: Successfully loaded SELinux policy in 53.847ms.
Sep 6 00:16:02.493869 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.445ms.
Sep 6 00:16:02.493882 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:16:02.493896 systemd[1]: Detected virtualization kvm.
Sep 6 00:16:02.493908 systemd[1]: Detected architecture x86-64.
Sep 6 00:16:02.493921 systemd[1]: Detected first boot.
Sep 6 00:16:02.493954 systemd[1]: Hostname set to .
Sep 6 00:16:02.493968 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:16:02.493985 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:16:02.493997 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:16:02.494011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:16:02.494026 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:16:02.494040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:16:02.494054 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:16:02.494068 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:16:02.494092 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:16:02.494107 systemd[1]: Stopped iscsid.service.
Sep 6 00:16:02.494121 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:16:02.494133 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 00:16:02.494146 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:16:02.494159 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:16:02.494172 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:16:02.494185 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:16:02.494207 systemd[1]: Created slice system-getty.slice.
Sep 6 00:16:02.494245 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:16:02.494269 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:16:02.494289 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:16:02.494307 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:16:02.494326 systemd[1]: Created slice user.slice.
Sep 6 00:16:02.494346 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:16:02.494368 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:16:02.494381 systemd[1]: Set up automount boot.automount.
Sep 6 00:16:02.494394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:16:02.494406 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 00:16:02.494418 systemd[1]: Stopped target initrd-fs.target.
Sep 6 00:16:02.494430 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 00:16:02.494442 systemd[1]: Reached target integritysetup.target.
Sep 6 00:16:02.494455 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:16:02.494467 systemd[1]: Reached target remote-fs.target.
Sep 6 00:16:02.494504 systemd[1]: Reached target slices.target.
Sep 6 00:16:02.494518 systemd[1]: Reached target swap.target.
Sep 6 00:16:02.494531 systemd[1]: Reached target torcx.target.
Sep 6 00:16:02.494544 systemd[1]: Reached target veritysetup.target.
Sep 6 00:16:02.494556 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:16:02.494569 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:16:02.494582 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:16:02.494593 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:16:02.494606 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:16:02.494620 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:16:02.494640 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:16:02.494653 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:16:02.494706 systemd[1]: Mounting media.mount...
Sep 6 00:16:02.494724 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:16:02.494743 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:16:02.494758 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:16:02.494772 systemd[1]: Mounting tmp.mount...
Sep 6 00:16:02.494784 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:16:02.494809 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:02.494831 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:16:02.494851 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:16:02.494870 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:02.494884 systemd[1]: Starting modprobe@drm.service... Sep 6 00:16:02.494898 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:02.494913 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:16:02.494933 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:02.494970 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:16:02.494989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:16:02.495012 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:16:02.495034 kernel: kauditd_printk_skb: 66 callbacks suppressed Sep 6 00:16:02.495052 kernel: audit: type=1131 audit(1757117762.338:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.495070 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:16:02.495095 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:16:02.495119 kernel: audit: type=1131 audit(1757117762.352:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.495131 systemd[1]: Stopped systemd-journald.service. Sep 6 00:16:02.495143 systemd[1]: Starting systemd-journald.service... Sep 6 00:16:02.495162 kernel: audit: type=1130 audit(1757117762.369:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:02.495176 kernel: audit: type=1131 audit(1757117762.369:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.495189 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:16:02.495202 kernel: audit: type=1334 audit(1757117762.370:112): prog-id=18 op=LOAD Sep 6 00:16:02.495215 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:16:02.495227 kernel: audit: type=1334 audit(1757117762.371:113): prog-id=19 op=LOAD Sep 6 00:16:02.495266 kernel: audit: type=1334 audit(1757117762.371:114): prog-id=20 op=LOAD Sep 6 00:16:02.495278 kernel: audit: type=1334 audit(1757117762.371:115): prog-id=16 op=UNLOAD Sep 6 00:16:02.495293 kernel: audit: type=1334 audit(1757117762.371:116): prog-id=17 op=UNLOAD Sep 6 00:16:02.495305 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:16:02.495318 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:16:02.495335 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:16:02.495348 systemd[1]: Stopped verity-setup.service. Sep 6 00:16:02.495361 kernel: audit: type=1131 audit(1757117762.421:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.495379 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:02.495398 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:16:02.495418 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:16:02.495438 systemd[1]: Mounted media.mount. Sep 6 00:16:02.495460 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:16:02.495473 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:16:02.495486 systemd[1]: Mounted tmp.mount. 
Sep 6 00:16:02.495498 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:16:02.495513 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:16:02.495526 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:16:02.495540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:02.495556 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:16:02.495568 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:16:02.495580 systemd[1]: Finished modprobe@drm.service. Sep 6 00:16:02.495605 kernel: loop: module loaded Sep 6 00:16:02.495617 kernel: fuse: init (API version 7.34) Sep 6 00:16:02.495628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:02.495644 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:02.495657 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:16:02.495671 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:16:02.495686 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:16:02.495698 systemd[1]: Finished modprobe@loop.service. Sep 6 00:16:02.495723 systemd-journald[958]: Journal started Sep 6 00:16:02.495824 systemd-journald[958]: Runtime Journal (/run/log/journal/69dd0b7dfaef476fa260189c6464e47f) is 4.9M, max 39.5M, 34.5M free. 
Sep 6 00:15:57.796000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:15:57.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:15:57.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:15:57.863000 audit: BPF prog-id=10 op=LOAD Sep 6 00:15:57.863000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:15:57.863000 audit: BPF prog-id=11 op=LOAD Sep 6 00:15:57.863000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:15:57.987000 audit[884]: AVC avc: denied { associate } for pid=884 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:15:57.987000 audit[884]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d88c a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=867 pid=884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:15:57.987000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:15:57.990000 audit[884]: AVC avc: denied { associate } for pid=884 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:15:57.990000 audit[884]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d965 a2=1ed a3=0 items=2 ppid=867 pid=884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:15:57.990000 audit: CWD cwd="/" Sep 6 00:15:57.990000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:15:57.990000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:15:57.990000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:16:02.183000 audit: BPF prog-id=12 op=LOAD Sep 6 00:16:02.183000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:16:02.183000 audit: BPF prog-id=13 op=LOAD Sep 6 00:16:02.183000 audit: BPF prog-id=14 op=LOAD Sep 6 00:16:02.183000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:16:02.183000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:16:02.185000 audit: BPF prog-id=15 op=LOAD Sep 6 00:16:02.185000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:16:02.185000 audit: BPF prog-id=16 op=LOAD Sep 6 00:16:02.185000 audit: BPF prog-id=17 op=LOAD Sep 6 00:16:02.185000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:16:02.185000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:16:02.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:02.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.191000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:16:02.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:02.370000 audit: BPF prog-id=18 op=LOAD Sep 6 00:16:02.371000 audit: BPF prog-id=19 op=LOAD Sep 6 00:16:02.371000 audit: BPF prog-id=20 op=LOAD Sep 6 00:16:02.499442 systemd[1]: Started systemd-journald.service. Sep 6 00:16:02.371000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:16:02.371000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:16:02.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:02.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.491000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:16:02.491000 audit[958]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc28fc770 a2=4000 a3=7ffdc28fc80c items=0 ppid=1 pid=958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:02.491000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:16:02.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:16:02.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.180991 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:16:02.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:15:57.984332 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:02.181013 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:15:57.984768 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:16:02.186498 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:15:57.984791 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:16:02.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.500677 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:15:57.984826 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:16:02.504588 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:15:57.984837 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:16:02.505994 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:15:57.984873 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:16:02.507145 systemd[1]: Reached target network-pre.target. 
Sep 6 00:15:57.984887 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:15:57.985097 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:15:57.985142 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:15:57.985158 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:15:57.986736 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:15:57.986801 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:15:57.986836 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:15:57.986862 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:15:57.986894 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" 
path=/var/lib/torcx/store/3510.3.8 Sep 6 00:16:02.509347 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:15:57.986918 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:15:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:16:02.515437 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:16:01.600657 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:01.601128 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:01.601381 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:01.601703 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:01.601797 /usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:16:01.601920 
/usr/lib/systemd/system-generators/torcx-generator[884]: time="2025-09-06T00:16:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:16:02.516885 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:16:02.523419 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:16:02.527109 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:16:02.527984 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:16:02.530007 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:16:02.531460 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:02.533104 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:16:02.538219 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:16:02.541137 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:16:02.552710 systemd-journald[958]: Time spent on flushing to /var/log/journal/69dd0b7dfaef476fa260189c6464e47f is 90.155ms for 1161 entries. Sep 6 00:16:02.552710 systemd-journald[958]: System Journal (/var/log/journal/69dd0b7dfaef476fa260189c6464e47f) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:16:02.666514 systemd-journald[958]: Received client request to flush runtime journal. Sep 6 00:16:02.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:02.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.556117 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:16:02.557153 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:16:02.589281 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:16:02.609096 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:16:02.611450 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:16:02.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.648631 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:16:02.650869 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:16:02.668124 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:16:02.685217 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:16:02.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:02.688086 systemd[1]: Starting systemd-udev-settle.service... 
Sep 6 00:16:02.704059 udevadm[997]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:16:02.725475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:16:02.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.382084 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:16:03.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.383000 audit: BPF prog-id=21 op=LOAD Sep 6 00:16:03.383000 audit: BPF prog-id=22 op=LOAD Sep 6 00:16:03.383000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:16:03.383000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:16:03.384664 systemd[1]: Starting systemd-udevd.service... Sep 6 00:16:03.412821 systemd-udevd[998]: Using default interface naming scheme 'v252'. Sep 6 00:16:03.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.447000 audit: BPF prog-id=23 op=LOAD Sep 6 00:16:03.445673 systemd[1]: Started systemd-udevd.service. Sep 6 00:16:03.451981 systemd[1]: Starting systemd-networkd.service... Sep 6 00:16:03.459000 audit: BPF prog-id=24 op=LOAD Sep 6 00:16:03.460000 audit: BPF prog-id=25 op=LOAD Sep 6 00:16:03.460000 audit: BPF prog-id=26 op=LOAD Sep 6 00:16:03.461833 systemd[1]: Starting systemd-userdbd.service... 
Sep 6 00:16:03.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.540730 systemd[1]: Started systemd-userdbd.service. Sep 6 00:16:03.581846 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:03.582166 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:16:03.584007 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:03.588594 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:03.592589 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:03.593566 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:16:03.593740 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:16:03.593896 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:03.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.596814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:03.597064 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:03.599697 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:16:03.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.603533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:03.603752 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:16:03.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.610920 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:16:03.611188 systemd[1]: Finished modprobe@loop.service. Sep 6 00:16:03.612323 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:03.635036 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:16:03.663446 systemd-networkd[1005]: lo: Link UP Sep 6 00:16:03.663460 systemd-networkd[1005]: lo: Gained carrier Sep 6 00:16:03.665132 systemd-networkd[1005]: Enumeration completed Sep 6 00:16:03.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:03.665385 systemd[1]: Started systemd-networkd.service. 
Sep 6 00:16:03.665404 systemd-networkd[1005]: eth1: Configuring with /run/systemd/network/10-02:a5:f2:77:84:a6.network. Sep 6 00:16:03.667409 systemd-networkd[1005]: eth0: Configuring with /run/systemd/network/10-8e:ff:30:ba:4b:2d.network. Sep 6 00:16:03.668567 systemd-networkd[1005]: eth1: Link UP Sep 6 00:16:03.668580 systemd-networkd[1005]: eth1: Gained carrier Sep 6 00:16:03.673694 systemd-networkd[1005]: eth0: Link UP Sep 6 00:16:03.673707 systemd-networkd[1005]: eth0: Gained carrier Sep 6 00:16:03.706279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:16:03.735321 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:16:03.757000 audit[1006]: AVC avc: denied { confidentiality } for pid=1006 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:16:03.770273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:16:03.757000 audit[1006]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5586e64a29e0 a1=338ec a2=7f788e39dbc5 a3=5 items=110 ppid=998 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:03.757000 audit: CWD cwd="/" Sep 6 00:16:03.757000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=1 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=2 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=3 name=(null) inode=14619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=4 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=5 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=6 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=7 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=8 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=9 name=(null) inode=14622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=10 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=11 name=(null) inode=14623 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH 
item=12 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=13 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=14 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=15 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=16 name=(null) inode=14621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=17 name=(null) inode=14626 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=18 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=19 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=20 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=21 name=(null) inode=14628 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=22 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=23 name=(null) inode=14629 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=24 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=25 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=26 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=27 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=28 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=29 name=(null) inode=14632 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=30 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=31 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=32 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=33 name=(null) inode=14634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=34 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=35 name=(null) inode=14635 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=36 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=37 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=38 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=39 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=40 name=(null) inode=14633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=41 name=(null) inode=14638 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=42 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=43 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=44 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=45 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=46 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=47 name=(null) inode=14641 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=48 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:16:03.757000 audit: PATH item=49 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=50 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=51 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=52 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=53 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=55 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=56 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=57 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=58 name=(null) 
inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=59 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=60 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=61 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=62 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=63 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=64 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=65 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=66 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=67 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=68 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=69 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=70 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=71 name=(null) inode=14653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=72 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=73 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=74 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=75 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=76 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=77 name=(null) inode=14656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=78 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=79 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=80 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=81 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=82 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=83 name=(null) inode=14659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=84 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=85 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=86 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=87 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=88 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=89 name=(null) inode=14662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=90 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=91 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=92 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=93 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=94 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 
audit: PATH item=95 name=(null) inode=14665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=96 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=97 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=98 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=99 name=(null) inode=14667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=100 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=101 name=(null) inode=14668 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=102 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=103 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=104 name=(null) inode=14666 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=105 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=106 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=107 name=(null) inode=14671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PATH item=109 name=(null) inode=14674 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:03.757000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:16:03.804280 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 6 00:16:03.847314 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:16:03.853352 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:16:03.979271 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:16:04.009040 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:16:04.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:04.011770 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:16:04.036814 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:16:04.072267 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:16:04.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.073634 systemd[1]: Reached target cryptsetup.target. Sep 6 00:16:04.076637 systemd[1]: Starting lvm2-activation.service... Sep 6 00:16:04.084329 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:16:04.118173 systemd[1]: Finished lvm2-activation.service. Sep 6 00:16:04.119272 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:16:04.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.122131 systemd[1]: Mounting media-configdrive.mount... Sep 6 00:16:04.122861 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:16:04.122932 systemd[1]: Reached target machines.target. Sep 6 00:16:04.125294 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:16:04.148336 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:16:04.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.153589 kernel: ISO 9660 Extensions: RRIP_1991A Sep 6 00:16:04.156988 systemd[1]: Mounted media-configdrive.mount. 
Sep 6 00:16:04.157917 systemd[1]: Reached target local-fs.target. Sep 6 00:16:04.160772 systemd[1]: Starting ldconfig.service... Sep 6 00:16:04.162267 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:16:04.162344 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:16:04.168681 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:16:04.172799 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:16:04.175640 systemd[1]: Starting systemd-sysext.service... Sep 6 00:16:04.185032 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1043 (bootctl) Sep 6 00:16:04.187559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:16:04.219594 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:16:04.237957 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:16:04.238335 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:16:04.312154 kernel: loop0: detected capacity change from 0 to 224512 Sep 6 00:16:04.320783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:16:04.323106 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:16:04.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.357459 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:16:04.382703 kernel: loop1: detected capacity change from 0 to 224512 Sep 6 00:16:04.404479 (sd-sysext)[1053]: Using extensions 'kubernetes'. Sep 6 00:16:04.407048 (sd-sysext)[1053]: Merged extensions into '/usr'. 
Sep 6 00:16:04.411434 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) Sep 6 00:16:04.411434 systemd-fsck[1049]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:16:04.419905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:16:04.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.423454 systemd[1]: Mounting boot.mount... Sep 6 00:16:04.443546 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:04.445729 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:16:04.448536 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:16:04.452449 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:04.455055 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:04.457669 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:04.458435 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:16:04.458622 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:16:04.458886 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:04.466223 systemd[1]: Mounted boot.mount. Sep 6 00:16:04.467348 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:16:04.469621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:04.469809 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 00:16:04.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.471494 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:16:04.471658 systemd[1]: Finished modprobe@loop.service. Sep 6 00:16:04.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.474061 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:04.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.476710 systemd[1]: Finished systemd-sysext.service. Sep 6 00:16:04.478535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:04.478692 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:04.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:04.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:04.481716 systemd[1]: Starting ensure-sysext.service... Sep 6 00:16:04.482558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:16:04.486243 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:16:04.496614 systemd[1]: Reloading. Sep 6 00:16:04.528386 systemd-tmpfiles[1062]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:16:04.539482 systemd-tmpfiles[1062]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:16:04.549071 systemd-tmpfiles[1062]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:16:04.706209 /usr/lib/systemd/system-generators/torcx-generator[1081]: time="2025-09-06T00:16:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:04.711332 /usr/lib/systemd/system-generators/torcx-generator[1081]: time="2025-09-06T00:16:04Z" level=info msg="torcx already run" Sep 6 00:16:04.719033 ldconfig[1042]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:16:04.869570 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:16:04.869602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 6 00:16:04.902083 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:16:04.983000 audit: BPF prog-id=27 op=LOAD
Sep 6 00:16:04.984000 audit: BPF prog-id=28 op=LOAD
Sep 6 00:16:04.984000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:16:04.984000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=29 op=LOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=18 op=UNLOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=19 op=UNLOAD
Sep 6 00:16:04.986000 audit: BPF prog-id=20 op=UNLOAD
Sep 6 00:16:04.987000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:16:04.987000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=24 op=UNLOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=35 op=LOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=25 op=UNLOAD
Sep 6 00:16:04.989000 audit: BPF prog-id=26 op=UNLOAD
Sep 6 00:16:04.992458 systemd[1]: Finished ldconfig.service.
Sep 6 00:16:04.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.993409 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:16:04.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.995468 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:16:05.000888 systemd[1]: Starting audit-rules.service...
Sep 6 00:16:05.003111 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:16:05.007782 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:16:05.010000 audit: BPF prog-id=36 op=LOAD
Sep 6 00:16:05.015110 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:16:05.017000 audit: BPF prog-id=37 op=LOAD
Sep 6 00:16:05.020026 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:16:05.026069 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:16:05.033000 audit[1138]: SYSTEM_BOOT pid=1138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.039629 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.044789 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:16:05.048965 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:16:05.053208 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:16:05.053883 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.054155 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:05.056030 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:16:05.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.057318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:16:05.057456 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:16:05.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.058706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:16:05.058845 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:16:05.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.059939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:16:05.060085 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:16:05.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.065645 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:16:05.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.069245 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.071195 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:16:05.074033 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:16:05.079078 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:16:05.079706 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.079844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:05.079989 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:05.081432 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:16:05.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.083873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:16:05.084035 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:16:05.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.094533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:16:05.094786 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:16:05.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.097801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:16:05.098137 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:16:05.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.103191 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.106031 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:16:05.109300 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:16:05.114206 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:16:05.118859 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:16:05.119935 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.120277 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:05.123034 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:16:05.127272 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:16:05.130402 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:05.132481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:16:05.132709 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:16:05.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.133771 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:16:05.133911 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:16:05.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.135329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:16:05.135519 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:16:05.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.139045 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:16:05.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.140975 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:16:05.145667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:16:05.145938 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:16:05.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.146840 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.154224 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:16:05.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:05.157268 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:16:05.157320 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:16:05.184000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:16:05.184000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd5ba22b0 a2=420 a3=0 items=0 ppid=1129 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:16:05.184000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:16:05.185680 augenrules[1161]: No rules
Sep 6 00:16:05.185797 systemd[1]: Finished audit-rules.service.
Sep 6 00:16:05.217882 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:16:05.218912 systemd[1]: Reached target time-set.target.
Sep 6 00:16:05.225854 systemd-resolved[1132]: Positive Trust Anchors:
Sep 6 00:16:05.225895 systemd-resolved[1132]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:16:05.226026 systemd-resolved[1132]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:16:05.234365 systemd-resolved[1132]: Using system hostname 'ci-3510.3.8-n-27671cbf1d'.
Sep 6 00:16:05.237451 systemd[1]: Started systemd-resolved.service.
Sep 6 00:16:05.238428 systemd[1]: Reached target network.target.
Sep 6 00:16:05.238974 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:16:05.239499 systemd[1]: Reached target sysinit.target.
Sep 6 00:16:05.240137 systemd[1]: Started motdgen.path.
Sep 6 00:16:05.240679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:16:05.241685 systemd[1]: Started logrotate.timer.
Sep 6 00:16:05.242398 systemd[1]: Started mdadm.timer.
Sep 6 00:16:05.242893 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:16:05.243541 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:16:05.243594 systemd[1]: Reached target paths.target.
Sep 6 00:16:05.244185 systemd[1]: Reached target timers.target.
Sep 6 00:16:05.245538 systemd[1]: Listening on dbus.socket.
Sep 6 00:16:05.248044 systemd[1]: Starting docker.socket...
Sep 6 00:16:05.255635 systemd[1]: Listening on sshd.socket.
Sep 6 00:16:05.256652 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:05.257782 systemd[1]: Listening on docker.socket.
Sep 6 00:16:05.259043 systemd[1]: Reached target sockets.target.
Sep 6 00:16:05.259779 systemd[1]: Reached target basic.target.
Sep 6 00:16:05.260568 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.260642 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:16:05.262913 systemd[1]: Starting containerd.service...
Sep 6 00:16:05.268009 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:16:05.271344 systemd[1]: Starting dbus.service...
Sep 6 00:16:05.276217 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:16:05.283853 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:16:05.285858 jq[1174]: false
Sep 6 00:16:05.286990 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:16:05.974881 systemd-resolved[1132]: Clock change detected. Flushing caches.
Sep 6 00:16:05.975139 systemd-timesyncd[1136]: Contacted time server 129.146.193.200:123 (0.flatcar.pool.ntp.org).
Sep 6 00:16:05.975254 systemd-timesyncd[1136]: Initial clock synchronization to Sat 2025-09-06 00:16:05.974743 UTC.
Sep 6 00:16:05.975568 systemd[1]: Starting motdgen.service...
Sep 6 00:16:05.978844 systemd[1]: Starting prepare-helm.service...
Sep 6 00:16:05.984977 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:16:05.989390 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:16:05.999432 systemd[1]: Starting systemd-logind.service...
Sep 6 00:16:06.003524 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:06.003664 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:16:06.004633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:16:06.006206 systemd[1]: Starting update-engine.service...
Sep 6 00:16:06.013495 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:16:06.022888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:16:06.023406 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:16:06.032176 jq[1190]: true
Sep 6 00:16:06.038842 tar[1192]: linux-amd64/LICENSE
Sep 6 00:16:06.041616 tar[1192]: linux-amd64/helm
Sep 6 00:16:06.059576 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:16:06.059889 systemd[1]: Finished motdgen.service.
Sep 6 00:16:06.078781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:16:06.079124 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:16:06.100223 jq[1195]: true
Sep 6 00:16:06.135682 dbus-daemon[1171]: [system] SELinux support is enabled
Sep 6 00:16:06.135933 systemd[1]: Started dbus.service.
Sep 6 00:16:06.139589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:16:06.139664 systemd[1]: Reached target system-config.target.
Sep 6 00:16:06.142628 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:16:06.142662 systemd[1]: Reached target user-config.target.
Sep 6 00:16:06.150506 systemd-networkd[1005]: eth0: Gained IPv6LL
Sep 6 00:16:06.153267 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:16:06.154200 systemd[1]: Reached target network-online.target.
Sep 6 00:16:06.156452 systemd[1]: Starting kubelet.service...
Sep 6 00:16:06.160278 extend-filesystems[1175]: Found loop1
Sep 6 00:16:06.160278 extend-filesystems[1175]: Found vda
Sep 6 00:16:06.160278 extend-filesystems[1175]: Found vda1
Sep 6 00:16:06.160278 extend-filesystems[1175]: Found vda2
Sep 6 00:16:06.167938 extend-filesystems[1175]: Found vda3
Sep 6 00:16:06.168841 extend-filesystems[1175]: Found usr
Sep 6 00:16:06.168841 extend-filesystems[1175]: Found vda4
Sep 6 00:16:06.168841 extend-filesystems[1175]: Found vda6
Sep 6 00:16:06.168841 extend-filesystems[1175]: Found vda7
Sep 6 00:16:06.168841 extend-filesystems[1175]: Found vda9
Sep 6 00:16:06.238623 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 6 00:16:06.240960 update_engine[1189]: I0906 00:16:06.227212 1189 main.cc:92] Flatcar Update Engine starting
Sep 6 00:16:06.252163 extend-filesystems[1175]: Checking size of /dev/vda9
Sep 6 00:16:06.252163 extend-filesystems[1175]: Resized partition /dev/vda9
Sep 6 00:16:06.243310 systemd[1]: Started update-engine.service.
Sep 6 00:16:06.260469 update_engine[1189]: I0906 00:16:06.244867 1189 update_check_scheduler.cc:74] Next update check in 4m52s
Sep 6 00:16:06.266473 extend-filesystems[1224]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:16:06.246960 systemd[1]: Started locksmithd.service.
Sep 6 00:16:06.277827 bash[1225]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:16:06.272051 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:16:06.325833 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 6 00:16:06.341691 systemd-networkd[1005]: eth1: Gained IPv6LL
Sep 6 00:16:06.347399 extend-filesystems[1224]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:16:06.347399 extend-filesystems[1224]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 6 00:16:06.347399 extend-filesystems[1224]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 6 00:16:06.352133 extend-filesystems[1175]: Resized filesystem in /dev/vda9
Sep 6 00:16:06.352133 extend-filesystems[1175]: Found vdb
Sep 6 00:16:06.349342 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:16:06.349599 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:16:06.379682 env[1193]: time="2025-09-06T00:16:06.379568580Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:16:06.442436 systemd-logind[1186]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 6 00:16:06.448257 systemd-logind[1186]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 6 00:16:06.449155 systemd-logind[1186]: New seat seat0.
Sep 6 00:16:06.457012 systemd[1]: Started systemd-logind.service.
Sep 6 00:16:06.489038 coreos-metadata[1170]: Sep 06 00:16:06.488 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:06.507285 coreos-metadata[1170]: Sep 06 00:16:06.507 INFO Fetch successful
Sep 6 00:16:06.526445 unknown[1170]: wrote ssh authorized keys file for user: core
Sep 6 00:16:06.550041 env[1193]: time="2025-09-06T00:16:06.549909580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:16:06.550258 env[1193]: time="2025-09-06T00:16:06.550208634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.556837 update-ssh-keys[1231]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:16:06.557590 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.566671233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.566756265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567216399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567280092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567302574Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567318949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567456575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.567827591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.568058865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:06.568255 env[1193]: time="2025-09-06T00:16:06.568086770Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:16:06.569130 env[1193]: time="2025-09-06T00:16:06.568171766Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:16:06.569130 env[1193]: time="2025-09-06T00:16:06.568193057Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573735500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573799911Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573822762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573886786Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573913220Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.573987727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574008762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574031820Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574052470Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574073225Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574095256Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.574287 env[1193]: time="2025-09-06T00:16:06.574119852Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:16:06.575099 env[1193]: time="2025-09-06T00:16:06.575059851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:16:06.575373 env[1193]: time="2025-09-06T00:16:06.575344063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:16:06.576104 env[1193]: time="2025-09-06T00:16:06.576068955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:16:06.576281 env[1193]: time="2025-09-06T00:16:06.576227113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.576464 env[1193]: time="2025-09-06T00:16:06.576422561Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:16:06.576690 env[1193]: time="2025-09-06T00:16:06.576667515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.576890 env[1193]: time="2025-09-06T00:16:06.576866242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577005 env[1193]: time="2025-09-06T00:16:06.576982668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577090 env[1193]: time="2025-09-06T00:16:06.577073349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577226 env[1193]: time="2025-09-06T00:16:06.577167057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577351 env[1193]: time="2025-09-06T00:16:06.577326619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577463 env[1193]: time="2025-09-06T00:16:06.577440646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577559 env[1193]: time="2025-09-06T00:16:06.577537655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.577669 env[1193]: time="2025-09-06T00:16:06.577647430Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:16:06.578054 env[1193]: time="2025-09-06T00:16:06.578025650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.578175 env[1193]: time="2025-09-06T00:16:06.578152009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.578302 env[1193]: time="2025-09-06T00:16:06.578281496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.578403 env[1193]: time="2025-09-06T00:16:06.578381909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:16:06.578506 env[1193]: time="2025-09-06T00:16:06.578479589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:16:06.578589 env[1193]: time="2025-09-06T00:16:06.578569071Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:16:06.578708 env[1193]: time="2025-09-06T00:16:06.578684025Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:16:06.578874 env[1193]: time="2025-09-06T00:16:06.578830449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:16:06.579474 env[1193]: time="2025-09-06T00:16:06.579373425Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:16:06.582429 env[1193]: time="2025-09-06T00:16:06.580419792Z" level=info msg="Connect containerd service"
Sep 6 00:16:06.582429 env[1193]: time="2025-09-06T00:16:06.580508316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:16:06.586287 env[1193]: time="2025-09-06T00:16:06.586209412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:16:06.589453 env[1193]: time="2025-09-06T00:16:06.589396522Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:16:06.591559 env[1193]: time="2025-09-06T00:16:06.591518351Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:16:06.597002 systemd[1]: Started containerd.service.
Sep 6 00:16:06.598935 env[1193]: time="2025-09-06T00:16:06.598607483Z" level=info msg="containerd successfully booted in 0.223961s"
Sep 6 00:16:06.599721 env[1193]: time="2025-09-06T00:16:06.599661777Z" level=info msg="Start subscribing containerd event"
Sep 6 00:16:06.599782 env[1193]: time="2025-09-06T00:16:06.599760873Z" level=info msg="Start recovering state"
Sep 6 00:16:06.599921 env[1193]: time="2025-09-06T00:16:06.599897641Z" level=info msg="Start event monitor"
Sep 6 00:16:06.599962 env[1193]: time="2025-09-06T00:16:06.599927872Z" level=info msg="Start snapshots syncer"
Sep 6 00:16:06.599962 env[1193]: time="2025-09-06T00:16:06.599953287Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:16:06.600032 env[1193]: time="2025-09-06T00:16:06.599967157Z" level=info msg="Start streaming server"
Sep 6 00:16:07.637868 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:16:07.651445 tar[1192]: linux-amd64/README.md
Sep 6 00:16:07.667467 systemd[1]: Finished prepare-helm.service.
Sep 6 00:16:08.139817 systemd[1]: Started kubelet.service.
Sep 6 00:16:08.513579 sshd_keygen[1203]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:16:08.557328 systemd[1]: Finished sshd-keygen.service.
Sep 6 00:16:08.561669 systemd[1]: Starting issuegen.service...
Sep 6 00:16:08.576988 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:16:08.577354 systemd[1]: Finished issuegen.service.
Sep 6 00:16:08.581207 systemd[1]: Starting systemd-user-sessions.service...
Sep 6 00:16:08.597884 systemd[1]: Finished systemd-user-sessions.service.
Sep 6 00:16:08.601862 systemd[1]: Started getty@tty1.service.
Sep 6 00:16:08.605492 systemd[1]: Started serial-getty@ttyS0.service.
Sep 6 00:16:08.606877 systemd[1]: Reached target getty.target.
Sep 6 00:16:08.607743 systemd[1]: Reached target multi-user.target.
Sep 6 00:16:08.611110 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:16:08.627915 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:16:08.628217 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:16:08.629438 systemd[1]: Startup finished in 1.011s (kernel) + 7.971s (initrd) + 10.212s (userspace) = 19.195s. Sep 6 00:16:09.012872 kubelet[1242]: E0906 00:16:09.012643 1242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:09.015916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:09.016160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:16:09.016594 systemd[1]: kubelet.service: Consumed 1.782s CPU time. Sep 6 00:16:14.394924 systemd[1]: Created slice system-sshd.slice. Sep 6 00:16:14.397167 systemd[1]: Started sshd@0-146.190.126.13:22-147.75.109.163:39834.service. Sep 6 00:16:14.479731 sshd[1264]: Accepted publickey for core from 147.75.109.163 port 39834 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:14.483043 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:14.502163 systemd[1]: Created slice user-500.slice. Sep 6 00:16:14.504054 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:16:14.511410 systemd-logind[1186]: New session 1 of user core. Sep 6 00:16:14.525039 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:16:14.528749 systemd[1]: Starting user@500.service... 
Sep 6 00:16:14.538868 (systemd)[1267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:14.674538 systemd[1267]: Queued start job for default target default.target. Sep 6 00:16:14.676327 systemd[1267]: Reached target paths.target. Sep 6 00:16:14.676375 systemd[1267]: Reached target sockets.target. Sep 6 00:16:14.676396 systemd[1267]: Reached target timers.target. Sep 6 00:16:14.676415 systemd[1267]: Reached target basic.target. Sep 6 00:16:14.676522 systemd[1267]: Reached target default.target. Sep 6 00:16:14.676578 systemd[1267]: Startup finished in 124ms. Sep 6 00:16:14.677058 systemd[1]: Started user@500.service. Sep 6 00:16:14.678913 systemd[1]: Started session-1.scope. Sep 6 00:16:14.746092 systemd[1]: Started sshd@1-146.190.126.13:22-147.75.109.163:39846.service. Sep 6 00:16:14.794781 sshd[1276]: Accepted publickey for core from 147.75.109.163 port 39846 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:14.796573 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:14.806433 systemd-logind[1186]: New session 2 of user core. Sep 6 00:16:14.808165 systemd[1]: Started session-2.scope. Sep 6 00:16:14.881948 sshd[1276]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:14.889803 systemd[1]: Started sshd@2-146.190.126.13:22-147.75.109.163:39848.service. Sep 6 00:16:14.890684 systemd[1]: sshd@1-146.190.126.13:22-147.75.109.163:39846.service: Deactivated successfully. Sep 6 00:16:14.892346 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:16:14.893486 systemd-logind[1186]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:16:14.895652 systemd-logind[1186]: Removed session 2. 
Sep 6 00:16:14.946009 sshd[1281]: Accepted publickey for core from 147.75.109.163 port 39848 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:14.948811 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:14.956417 systemd-logind[1186]: New session 3 of user core. Sep 6 00:16:14.957300 systemd[1]: Started session-3.scope. Sep 6 00:16:15.020726 sshd[1281]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:15.026807 systemd[1]: Started sshd@3-146.190.126.13:22-147.75.109.163:39860.service. Sep 6 00:16:15.028234 systemd[1]: sshd@2-146.190.126.13:22-147.75.109.163:39848.service: Deactivated successfully. Sep 6 00:16:15.029588 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:16:15.031397 systemd-logind[1186]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:16:15.033670 systemd-logind[1186]: Removed session 3. Sep 6 00:16:15.074704 sshd[1287]: Accepted publickey for core from 147.75.109.163 port 39860 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:15.077122 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:15.085390 systemd-logind[1186]: New session 4 of user core. Sep 6 00:16:15.086215 systemd[1]: Started session-4.scope. Sep 6 00:16:15.152479 sshd[1287]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:15.158228 systemd[1]: sshd@3-146.190.126.13:22-147.75.109.163:39860.service: Deactivated successfully. Sep 6 00:16:15.158935 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:16:15.160496 systemd-logind[1186]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:16:15.162680 systemd[1]: Started sshd@4-146.190.126.13:22-147.75.109.163:39874.service. Sep 6 00:16:15.164488 systemd-logind[1186]: Removed session 4. 
Sep 6 00:16:15.219309 sshd[1294]: Accepted publickey for core from 147.75.109.163 port 39874 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:15.221911 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:15.228496 systemd-logind[1186]: New session 5 of user core. Sep 6 00:16:15.228972 systemd[1]: Started session-5.scope. Sep 6 00:16:15.308589 sudo[1297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:16:15.309067 sudo[1297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:16:15.358620 systemd[1]: Starting docker.service... Sep 6 00:16:15.433364 env[1307]: time="2025-09-06T00:16:15.433306386Z" level=info msg="Starting up" Sep 6 00:16:15.435755 env[1307]: time="2025-09-06T00:16:15.435709976Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:16:15.435944 env[1307]: time="2025-09-06T00:16:15.435914817Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:16:15.436089 env[1307]: time="2025-09-06T00:16:15.436058191Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:16:15.436200 env[1307]: time="2025-09-06T00:16:15.436177891Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:16:15.439852 env[1307]: time="2025-09-06T00:16:15.439814777Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:16:15.440029 env[1307]: time="2025-09-06T00:16:15.440010127Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:16:15.440110 env[1307]: time="2025-09-06T00:16:15.440091548Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:16:15.440187 env[1307]: time="2025-09-06T00:16:15.440170774Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Sep 6 00:16:15.476978 env[1307]: time="2025-09-06T00:16:15.475538238Z" level=info msg="Loading containers: start." Sep 6 00:16:15.684526 kernel: Initializing XFRM netlink socket Sep 6 00:16:15.736587 env[1307]: time="2025-09-06T00:16:15.736099765Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:16:15.854186 systemd-networkd[1005]: docker0: Link UP Sep 6 00:16:15.874913 env[1307]: time="2025-09-06T00:16:15.874850124Z" level=info msg="Loading containers: done." Sep 6 00:16:15.893901 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2367056496-merged.mount: Deactivated successfully. Sep 6 00:16:15.900200 env[1307]: time="2025-09-06T00:16:15.900131234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:16:15.900805 env[1307]: time="2025-09-06T00:16:15.900768674Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:16:15.901382 env[1307]: time="2025-09-06T00:16:15.901348084Z" level=info msg="Daemon has completed initialization" Sep 6 00:16:15.928465 systemd[1]: Started docker.service. Sep 6 00:16:15.944532 env[1307]: time="2025-09-06T00:16:15.944061230Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:16:15.981632 systemd[1]: Starting coreos-metadata.service... Sep 6 00:16:16.059160 coreos-metadata[1423]: Sep 06 00:16:16.058 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:16:16.071601 coreos-metadata[1423]: Sep 06 00:16:16.071 INFO Fetch successful Sep 6 00:16:16.088012 systemd[1]: Finished coreos-metadata.service. 
Sep 6 00:16:17.292825 env[1193]: time="2025-09-06T00:16:17.292758061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 6 00:16:17.996669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619159693.mount: Deactivated successfully. Sep 6 00:16:19.230181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:16:19.230407 systemd[1]: Stopped kubelet.service. Sep 6 00:16:19.230463 systemd[1]: kubelet.service: Consumed 1.782s CPU time. Sep 6 00:16:19.232335 systemd[1]: Starting kubelet.service... Sep 6 00:16:19.377408 systemd[1]: Started kubelet.service. Sep 6 00:16:19.457926 kubelet[1445]: E0906 00:16:19.457873 1445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:19.461662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:19.461835 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:16:20.020040 env[1193]: time="2025-09-06T00:16:20.019972248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:20.022403 env[1193]: time="2025-09-06T00:16:20.022338237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:20.024931 env[1193]: time="2025-09-06T00:16:20.024813810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:20.026744 env[1193]: time="2025-09-06T00:16:20.026697637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:20.028257 env[1193]: time="2025-09-06T00:16:20.028182491Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 6 00:16:20.029555 env[1193]: time="2025-09-06T00:16:20.029504636Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 6 00:16:22.258371 env[1193]: time="2025-09-06T00:16:22.258308212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:22.262311 env[1193]: time="2025-09-06T00:16:22.262224230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Sep 6 00:16:22.264125 env[1193]: time="2025-09-06T00:16:22.264063992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:22.267537 env[1193]: time="2025-09-06T00:16:22.267379441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:22.268346 env[1193]: time="2025-09-06T00:16:22.268290502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 6 00:16:22.270073 env[1193]: time="2025-09-06T00:16:22.270005367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 6 00:16:24.047200 env[1193]: time="2025-09-06T00:16:24.047127621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:24.049220 env[1193]: time="2025-09-06T00:16:24.049154684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:24.051817 env[1193]: time="2025-09-06T00:16:24.051759807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:24.054477 env[1193]: time="2025-09-06T00:16:24.054424285Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:24.055732 env[1193]: time="2025-09-06T00:16:24.055689211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 6 00:16:24.056422 env[1193]: time="2025-09-06T00:16:24.056383432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 00:16:25.442293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002598230.mount: Deactivated successfully. Sep 6 00:16:26.497730 env[1193]: time="2025-09-06T00:16:26.497617005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:26.500005 env[1193]: time="2025-09-06T00:16:26.499925672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:26.502198 env[1193]: time="2025-09-06T00:16:26.502129701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:26.504025 env[1193]: time="2025-09-06T00:16:26.503959360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:26.504623 env[1193]: time="2025-09-06T00:16:26.504579121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference 
\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 6 00:16:26.505819 env[1193]: time="2025-09-06T00:16:26.505777173Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:16:27.009602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794547563.mount: Deactivated successfully. Sep 6 00:16:28.357586 env[1193]: time="2025-09-06T00:16:28.357501760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.359523 env[1193]: time="2025-09-06T00:16:28.359473625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.361404 env[1193]: time="2025-09-06T00:16:28.361361240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.363533 env[1193]: time="2025-09-06T00:16:28.363493130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.364335 env[1193]: time="2025-09-06T00:16:28.364291611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:16:28.365307 env[1193]: time="2025-09-06T00:16:28.365236287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:16:28.944797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111477137.mount: Deactivated successfully. 
Sep 6 00:16:28.951378 env[1193]: time="2025-09-06T00:16:28.951295685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.954284 env[1193]: time="2025-09-06T00:16:28.954199124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.955960 env[1193]: time="2025-09-06T00:16:28.955911222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.957676 env[1193]: time="2025-09-06T00:16:28.957641465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:28.958879 env[1193]: time="2025-09-06T00:16:28.958833396Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:16:28.959839 env[1193]: time="2025-09-06T00:16:28.959787035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 6 00:16:29.430524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607156246.mount: Deactivated successfully. Sep 6 00:16:29.480406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:16:29.480707 systemd[1]: Stopped kubelet.service. Sep 6 00:16:29.483269 systemd[1]: Starting kubelet.service... Sep 6 00:16:29.676357 systemd[1]: Started kubelet.service. 
Sep 6 00:16:29.821566 kubelet[1455]: E0906 00:16:29.821501 1455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:29.824338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:29.824552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:16:32.665421 env[1193]: time="2025-09-06T00:16:32.665332598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:32.668458 env[1193]: time="2025-09-06T00:16:32.668400464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:32.671062 env[1193]: time="2025-09-06T00:16:32.671012609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:32.675377 env[1193]: time="2025-09-06T00:16:32.674441547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:32.676196 env[1193]: time="2025-09-06T00:16:32.676151952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 6 00:16:35.930288 systemd[1]: Stopped kubelet.service. Sep 6 00:16:35.932822 systemd[1]: Starting kubelet.service... 
Sep 6 00:16:35.973848 systemd[1]: Reloading. Sep 6 00:16:36.120019 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2025-09-06T00:16:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:36.123575 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2025-09-06T00:16:36Z" level=info msg="torcx already run" Sep 6 00:16:36.255136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:16:36.255170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:16:36.283885 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:16:36.410779 systemd[1]: Started kubelet.service. Sep 6 00:16:36.413306 systemd[1]: Stopping kubelet.service... Sep 6 00:16:36.414866 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:16:36.415233 systemd[1]: Stopped kubelet.service. Sep 6 00:16:36.418673 systemd[1]: Starting kubelet.service... Sep 6 00:16:36.559412 systemd[1]: Started kubelet.service. Sep 6 00:16:36.638152 kubelet[1559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:36.638152 kubelet[1559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:16:36.638152 kubelet[1559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:36.638674 kubelet[1559]: I0906 00:16:36.638266 1559 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:16:37.216511 kubelet[1559]: I0906 00:16:37.216354 1559 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:16:37.216511 kubelet[1559]: I0906 00:16:37.216429 1559 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:16:37.217774 kubelet[1559]: I0906 00:16:37.217725 1559 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:16:37.255874 kubelet[1559]: I0906 00:16:37.255283 1559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:16:37.272365 kubelet[1559]: E0906 00:16:37.272311 1559 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.126.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:37.277733 kubelet[1559]: E0906 00:16:37.277639 1559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:16:37.277988 kubelet[1559]: I0906 00:16:37.277963 1559 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:16:37.283662 kubelet[1559]: I0906 00:16:37.283618 1559 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:16:37.284225 kubelet[1559]: I0906 00:16:37.284179 1559 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:16:37.284550 kubelet[1559]: I0906 00:16:37.284359 1559 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-27671cbf1d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l,"CgroupVersion":2} Sep 6 00:16:37.285808 kubelet[1559]: I0906 00:16:37.285758 1559 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:16:37.285997 kubelet[1559]: I0906 00:16:37.285977 1559 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 00:16:37.286351 kubelet[1559]: I0906 00:16:37.286292 1559 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:37.290899 kubelet[1559]: I0906 00:16:37.290849 1559 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:16:37.291202 kubelet[1559]: I0906 00:16:37.291179 1559 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:16:37.291401 kubelet[1559]: I0906 00:16:37.291385 1559 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:16:37.291498 kubelet[1559]: I0906 00:16:37.291482 1559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:16:37.297616 kubelet[1559]: W0906 00:16:37.297516 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.126.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-27671cbf1d&limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:37.297820 kubelet[1559]: E0906 00:16:37.297630 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.126.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-27671cbf1d&limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:37.297820 kubelet[1559]: I0906 00:16:37.297751 1559 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:16:37.298410 kubelet[1559]: I0906 00:16:37.298373 1559 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Sep 6 00:16:37.298535 kubelet[1559]: W0906 00:16:37.298469 1559 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:16:37.307736 kubelet[1559]: I0906 00:16:37.307638 1559 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:16:37.307736 kubelet[1559]: I0906 00:16:37.307738 1559 server.go:1287] "Started kubelet" Sep 6 00:16:37.322020 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:16:37.322541 kubelet[1559]: I0906 00:16:37.322497 1559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:16:37.329690 kubelet[1559]: I0906 00:16:37.329035 1559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:16:37.329690 kubelet[1559]: I0906 00:16:37.329208 1559 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:16:37.331001 kubelet[1559]: I0906 00:16:37.330961 1559 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:16:37.332971 kubelet[1559]: I0906 00:16:37.332925 1559 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:16:37.333702 kubelet[1559]: E0906 00:16:37.333660 1559 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" Sep 6 00:16:37.333976 kubelet[1559]: I0906 00:16:37.333879 1559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:16:37.334304 kubelet[1559]: I0906 00:16:37.334208 1559 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:16:37.334753 kubelet[1559]: I0906 00:16:37.334722 1559 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:16:37.335029 kubelet[1559]: I0906 00:16:37.335008 1559 
reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:16:37.338693 kubelet[1559]: W0906 00:16:37.338607 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.126.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:37.339010 kubelet[1559]: E0906 00:16:37.338966 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.126.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:37.342564 kubelet[1559]: E0906 00:16:37.339394 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-27671cbf1d?timeout=10s\": dial tcp 146.190.126.13:6443: connect: connection refused" interval="200ms" Sep 6 00:16:37.342564 kubelet[1559]: W0906 00:16:37.340948 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.126.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:37.342804 kubelet[1559]: E0906 00:16:37.342571 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.126.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:37.342804 kubelet[1559]: I0906 00:16:37.342151 1559 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:16:37.342804 
kubelet[1559]: I0906 00:16:37.342687 1559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:16:37.346960 kubelet[1559]: E0906 00:16:37.345204 1559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.126.13:6443/api/v1/namespaces/default/events\": dial tcp 146.190.126.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-27671cbf1d.18628953b90aca6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-27671cbf1d,UID:ci-3510.3.8-n-27671cbf1d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-27671cbf1d,},FirstTimestamp:2025-09-06 00:16:37.307697774 +0000 UTC m=+0.741186082,LastTimestamp:2025-09-06 00:16:37.307697774 +0000 UTC m=+0.741186082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-27671cbf1d,}" Sep 6 00:16:37.347836 kubelet[1559]: I0906 00:16:37.347303 1559 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:16:37.374274 kubelet[1559]: E0906 00:16:37.370278 1559 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:16:37.380598 kubelet[1559]: I0906 00:16:37.380534 1559 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 6 00:16:37.385528 kubelet[1559]: I0906 00:16:37.385487 1559 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:16:37.385846 kubelet[1559]: I0906 00:16:37.385822 1559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:16:37.385954 kubelet[1559]: I0906 00:16:37.385936 1559 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:37.388344 kubelet[1559]: I0906 00:16:37.388295 1559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:16:37.388344 kubelet[1559]: I0906 00:16:37.388352 1559 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:16:37.388577 kubelet[1559]: I0906 00:16:37.388383 1559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:16:37.388577 kubelet[1559]: I0906 00:16:37.388397 1559 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:16:37.388577 kubelet[1559]: E0906 00:16:37.388477 1559 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:16:37.390605 kubelet[1559]: I0906 00:16:37.390574 1559 policy_none.go:49] "None policy: Start" Sep 6 00:16:37.391393 kubelet[1559]: I0906 00:16:37.391371 1559 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:16:37.391592 kubelet[1559]: I0906 00:16:37.391579 1559 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:16:37.392034 kubelet[1559]: W0906 00:16:37.391021 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.126.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:37.392118 kubelet[1559]: E0906 00:16:37.392044 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.126.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:37.398569 systemd[1]: Created slice kubepods.slice. Sep 6 00:16:37.406766 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:16:37.412161 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 00:16:37.418710 kubelet[1559]: I0906 00:16:37.418672 1559 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:16:37.419554 kubelet[1559]: I0906 00:16:37.419533 1559 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:16:37.420045 kubelet[1559]: I0906 00:16:37.420009 1559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:16:37.421639 kubelet[1559]: E0906 00:16:37.421378 1559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:16:37.421733 kubelet[1559]: E0906 00:16:37.421700 1559 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-27671cbf1d\" not found" Sep 6 00:16:37.421913 kubelet[1559]: I0906 00:16:37.421894 1559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:16:37.501605 systemd[1]: Created slice kubepods-burstable-podf9328ca53a9636d6447683de6858570b.slice. Sep 6 00:16:37.519191 kubelet[1559]: E0906 00:16:37.519120 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.522342 systemd[1]: Created slice kubepods-burstable-pod35a6510e771c2b2d7f035729e857bce9.slice. 
Sep 6 00:16:37.524661 kubelet[1559]: I0906 00:16:37.524620 1559 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.525881 kubelet[1559]: E0906 00:16:37.525827 1559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.13:6443/api/v1/nodes\": dial tcp 146.190.126.13:6443: connect: connection refused" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.526208 kubelet[1559]: E0906 00:16:37.526172 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.529830 systemd[1]: Created slice kubepods-burstable-podb3f0424b8235ce872450ef91abda17d6.slice. Sep 6 00:16:37.532476 kubelet[1559]: E0906 00:16:37.532434 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536503 kubelet[1559]: I0906 00:16:37.536407 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536503 kubelet[1559]: I0906 00:16:37.536519 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536838 kubelet[1559]: I0906 00:16:37.536555 1559 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536838 kubelet[1559]: I0906 00:16:37.536585 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536838 kubelet[1559]: I0906 00:16:37.536614 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536838 kubelet[1559]: I0906 00:16:37.536641 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.536838 kubelet[1559]: I0906 00:16:37.536667 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.537008 kubelet[1559]: I0906 00:16:37.536747 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.537008 kubelet[1559]: I0906 00:16:37.536829 1559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3f0424b8235ce872450ef91abda17d6-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-27671cbf1d\" (UID: \"b3f0424b8235ce872450ef91abda17d6\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.543738 kubelet[1559]: E0906 00:16:37.543650 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-27671cbf1d?timeout=10s\": dial tcp 146.190.126.13:6443: connect: connection refused" interval="400ms" Sep 6 00:16:37.727965 kubelet[1559]: I0906 00:16:37.727912 1559 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.729490 kubelet[1559]: E0906 00:16:37.729409 1559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.13:6443/api/v1/nodes\": dial tcp 146.190.126.13:6443: connect: connection refused" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:37.820944 kubelet[1559]: E0906 00:16:37.820748 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:37.823441 env[1193]: 
time="2025-09-06T00:16:37.823372877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-27671cbf1d,Uid:f9328ca53a9636d6447683de6858570b,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:37.827730 kubelet[1559]: E0906 00:16:37.827684 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:37.829935 env[1193]: time="2025-09-06T00:16:37.829479801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-27671cbf1d,Uid:35a6510e771c2b2d7f035729e857bce9,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:37.833716 kubelet[1559]: E0906 00:16:37.833632 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:37.834524 env[1193]: time="2025-09-06T00:16:37.834466271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-27671cbf1d,Uid:b3f0424b8235ce872450ef91abda17d6,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:37.944344 kubelet[1559]: E0906 00:16:37.944276 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-27671cbf1d?timeout=10s\": dial tcp 146.190.126.13:6443: connect: connection refused" interval="800ms" Sep 6 00:16:38.132194 kubelet[1559]: I0906 00:16:38.132021 1559 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:38.133278 kubelet[1559]: E0906 00:16:38.133189 1559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.13:6443/api/v1/nodes\": dial tcp 146.190.126.13:6443: connect: connection refused" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:38.320175 
kubelet[1559]: W0906 00:16:38.320096 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.126.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:38.320175 kubelet[1559]: E0906 00:16:38.320179 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.126.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:38.355842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237047700.mount: Deactivated successfully. Sep 6 00:16:38.364531 env[1193]: time="2025-09-06T00:16:38.364455065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.368719 env[1193]: time="2025-09-06T00:16:38.368651291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.370225 env[1193]: time="2025-09-06T00:16:38.370167466Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.372604 env[1193]: time="2025-09-06T00:16:38.372518074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.374924 env[1193]: time="2025-09-06T00:16:38.374863285Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.376903 env[1193]: time="2025-09-06T00:16:38.376832352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.379486 env[1193]: time="2025-09-06T00:16:38.379422601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.380164 env[1193]: time="2025-09-06T00:16:38.380124866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.385225 env[1193]: time="2025-09-06T00:16:38.385062243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.387122 env[1193]: time="2025-09-06T00:16:38.387064787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.393502 env[1193]: time="2025-09-06T00:16:38.393412460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.396683 env[1193]: time="2025-09-06T00:16:38.396630837Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.423140 kubelet[1559]: W0906 00:16:38.423067 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.126.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:38.423615 kubelet[1559]: E0906 00:16:38.423150 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.126.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:38.444830 env[1193]: time="2025-09-06T00:16:38.444687098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:38.445110 env[1193]: time="2025-09-06T00:16:38.444784852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:38.445110 env[1193]: time="2025-09-06T00:16:38.444805604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:38.445324 env[1193]: time="2025-09-06T00:16:38.445121876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13d06312b7468cdddca4899d4fae5e9b413fed3676352700fb9bd24236d7b03c pid=1598 runtime=io.containerd.runc.v2 Sep 6 00:16:38.451231 kubelet[1559]: W0906 00:16:38.451054 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.126.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-27671cbf1d&limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:38.451231 kubelet[1559]: E0906 00:16:38.451176 1559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.126.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-27671cbf1d&limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:38.456279 env[1193]: time="2025-09-06T00:16:38.456156267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:38.456473 env[1193]: time="2025-09-06T00:16:38.456282848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:38.456473 env[1193]: time="2025-09-06T00:16:38.456314000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:38.456584 env[1193]: time="2025-09-06T00:16:38.456496190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d67a7e02c038a438dbd198021d0d933e4cf72b6b1825dae6cace1ffe3aaba12 pid=1617 runtime=io.containerd.runc.v2 Sep 6 00:16:38.478402 systemd[1]: Started cri-containerd-7d67a7e02c038a438dbd198021d0d933e4cf72b6b1825dae6cace1ffe3aaba12.scope. Sep 6 00:16:38.494574 systemd[1]: Started cri-containerd-13d06312b7468cdddca4899d4fae5e9b413fed3676352700fb9bd24236d7b03c.scope. Sep 6 00:16:38.509638 env[1193]: time="2025-09-06T00:16:38.500978867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:38.509638 env[1193]: time="2025-09-06T00:16:38.501078904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:38.509638 env[1193]: time="2025-09-06T00:16:38.501098933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:38.509638 env[1193]: time="2025-09-06T00:16:38.501574215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c925a6d3a4eb34b47235f5903ba08c11259bacca47d6f1994f861835b7d5b2d9 pid=1645 runtime=io.containerd.runc.v2 Sep 6 00:16:38.541307 systemd[1]: Started cri-containerd-c925a6d3a4eb34b47235f5903ba08c11259bacca47d6f1994f861835b7d5b2d9.scope. 
Sep 6 00:16:38.623713 env[1193]: time="2025-09-06T00:16:38.623654139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-27671cbf1d,Uid:b3f0424b8235ce872450ef91abda17d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"13d06312b7468cdddca4899d4fae5e9b413fed3676352700fb9bd24236d7b03c\"" Sep 6 00:16:38.628735 kubelet[1559]: E0906 00:16:38.627921 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:38.646300 env[1193]: time="2025-09-06T00:16:38.646051529Z" level=info msg="CreateContainer within sandbox \"13d06312b7468cdddca4899d4fae5e9b413fed3676352700fb9bd24236d7b03c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:16:38.653387 env[1193]: time="2025-09-06T00:16:38.653276762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-27671cbf1d,Uid:35a6510e771c2b2d7f035729e857bce9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d67a7e02c038a438dbd198021d0d933e4cf72b6b1825dae6cace1ffe3aaba12\"" Sep 6 00:16:38.654746 kubelet[1559]: E0906 00:16:38.654714 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:38.657584 env[1193]: time="2025-09-06T00:16:38.657491310Z" level=info msg="CreateContainer within sandbox \"7d67a7e02c038a438dbd198021d0d933e4cf72b6b1825dae6cace1ffe3aaba12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:16:38.668379 env[1193]: time="2025-09-06T00:16:38.668333102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-27671cbf1d,Uid:f9328ca53a9636d6447683de6858570b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c925a6d3a4eb34b47235f5903ba08c11259bacca47d6f1994f861835b7d5b2d9\"" Sep 6 00:16:38.669953 kubelet[1559]: E0906 00:16:38.669701 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:38.673650 env[1193]: time="2025-09-06T00:16:38.673574441Z" level=info msg="CreateContainer within sandbox \"c925a6d3a4eb34b47235f5903ba08c11259bacca47d6f1994f861835b7d5b2d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:16:38.675368 env[1193]: time="2025-09-06T00:16:38.675317602Z" level=info msg="CreateContainer within sandbox \"13d06312b7468cdddca4899d4fae5e9b413fed3676352700fb9bd24236d7b03c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5fa1845e13a37fc771894c8c46d678cb43b249c0c9c3aecc5c4b9d7d7cfa85d\"" Sep 6 00:16:38.677033 env[1193]: time="2025-09-06T00:16:38.676996033Z" level=info msg="StartContainer for \"e5fa1845e13a37fc771894c8c46d678cb43b249c0c9c3aecc5c4b9d7d7cfa85d\"" Sep 6 00:16:38.680311 env[1193]: time="2025-09-06T00:16:38.680263058Z" level=info msg="CreateContainer within sandbox \"7d67a7e02c038a438dbd198021d0d933e4cf72b6b1825dae6cace1ffe3aaba12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef14981c3657b2f2f497340498fc8281b1eb40bff231c60aa57e639d247e48ec\"" Sep 6 00:16:38.680855 env[1193]: time="2025-09-06T00:16:38.680804482Z" level=info msg="StartContainer for \"ef14981c3657b2f2f497340498fc8281b1eb40bff231c60aa57e639d247e48ec\"" Sep 6 00:16:38.693881 kubelet[1559]: W0906 00:16:38.693795 1559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.126.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.126.13:6443: connect: connection refused Sep 6 00:16:38.693881 kubelet[1559]: E0906 00:16:38.693874 1559 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.126.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:38.697029 env[1193]: time="2025-09-06T00:16:38.696969759Z" level=info msg="CreateContainer within sandbox \"c925a6d3a4eb34b47235f5903ba08c11259bacca47d6f1994f861835b7d5b2d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b79f43c146c3961646aacc3fd5b9f03749be00f3b440679a2552ad1de71bcfbd\"" Sep 6 00:16:38.697674 env[1193]: time="2025-09-06T00:16:38.697632674Z" level=info msg="StartContainer for \"b79f43c146c3961646aacc3fd5b9f03749be00f3b440679a2552ad1de71bcfbd\"" Sep 6 00:16:38.710459 systemd[1]: Started cri-containerd-e5fa1845e13a37fc771894c8c46d678cb43b249c0c9c3aecc5c4b9d7d7cfa85d.scope. Sep 6 00:16:38.743925 systemd[1]: Started cri-containerd-ef14981c3657b2f2f497340498fc8281b1eb40bff231c60aa57e639d247e48ec.scope. Sep 6 00:16:38.745361 kubelet[1559]: E0906 00:16:38.745296 1559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-27671cbf1d?timeout=10s\": dial tcp 146.190.126.13:6443: connect: connection refused" interval="1.6s" Sep 6 00:16:38.763395 systemd[1]: Started cri-containerd-b79f43c146c3961646aacc3fd5b9f03749be00f3b440679a2552ad1de71bcfbd.scope. 
Sep 6 00:16:38.850233 env[1193]: time="2025-09-06T00:16:38.850159232Z" level=info msg="StartContainer for \"e5fa1845e13a37fc771894c8c46d678cb43b249c0c9c3aecc5c4b9d7d7cfa85d\" returns successfully" Sep 6 00:16:38.863267 env[1193]: time="2025-09-06T00:16:38.863184825Z" level=info msg="StartContainer for \"ef14981c3657b2f2f497340498fc8281b1eb40bff231c60aa57e639d247e48ec\" returns successfully" Sep 6 00:16:38.891140 env[1193]: time="2025-09-06T00:16:38.891038899Z" level=info msg="StartContainer for \"b79f43c146c3961646aacc3fd5b9f03749be00f3b440679a2552ad1de71bcfbd\" returns successfully" Sep 6 00:16:38.935501 kubelet[1559]: I0906 00:16:38.935313 1559 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:38.935969 kubelet[1559]: E0906 00:16:38.935916 1559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.13:6443/api/v1/nodes\": dial tcp 146.190.126.13:6443: connect: connection refused" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:39.392533 kubelet[1559]: E0906 00:16:39.392456 1559 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.126.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.126.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:39.401609 kubelet[1559]: E0906 00:16:39.401562 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:39.401806 kubelet[1559]: E0906 00:16:39.401775 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:39.403852 
kubelet[1559]: E0906 00:16:39.403530 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:39.403852 kubelet[1559]: E0906 00:16:39.403718 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:39.406281 kubelet[1559]: E0906 00:16:39.406013 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:39.406281 kubelet[1559]: E0906 00:16:39.406179 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:40.408279 kubelet[1559]: E0906 00:16:40.408215 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:40.409187 kubelet[1559]: E0906 00:16:40.409121 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:40.409816 kubelet[1559]: E0906 00:16:40.409785 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:40.410141 kubelet[1559]: E0906 00:16:40.410115 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:40.410732 kubelet[1559]: E0906 
00:16:40.410702 1559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:40.411036 kubelet[1559]: E0906 00:16:40.411012 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:40.538097 kubelet[1559]: I0906 00:16:40.538057 1559 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.627301 kubelet[1559]: I0906 00:16:41.627179 1559 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.627301 kubelet[1559]: E0906 00:16:41.627276 1559 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-27671cbf1d\": node \"ci-3510.3.8-n-27671cbf1d\" not found" Sep 6 00:16:41.635581 kubelet[1559]: I0906 00:16:41.635518 1559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.647321 kubelet[1559]: I0906 00:16:41.647278 1559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.730484 kubelet[1559]: E0906 00:16:41.730409 1559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.730484 kubelet[1559]: I0906 00:16:41.730457 1559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.732171 kubelet[1559]: E0906 00:16:41.732062 1559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.732802 kubelet[1559]: E0906 00:16:41.732766 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:41.735934 kubelet[1559]: E0906 00:16:41.735871 1559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.736224 kubelet[1559]: I0906 00:16:41.736193 1559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:41.739889 kubelet[1559]: E0906 00:16:41.739771 1559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-27671cbf1d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:42.312317 kubelet[1559]: I0906 00:16:42.312219 1559 apiserver.go:52] "Watching apiserver" Sep 6 00:16:42.335609 kubelet[1559]: I0906 00:16:42.335539 1559 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:16:42.514865 kubelet[1559]: I0906 00:16:42.514782 1559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:42.523200 kubelet[1559]: W0906 00:16:42.523150 1559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:42.523923 kubelet[1559]: E0906 00:16:42.523891 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:43.414513 kubelet[1559]: E0906 00:16:43.414479 1559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:44.045016 systemd[1]: Reloading. Sep 6 00:16:44.160819 /usr/lib/systemd/system-generators/torcx-generator[1849]: time="2025-09-06T00:16:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:44.161410 /usr/lib/systemd/system-generators/torcx-generator[1849]: time="2025-09-06T00:16:44Z" level=info msg="torcx already run" Sep 6 00:16:44.321958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:16:44.321989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:16:44.361498 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:16:44.561015 systemd[1]: Stopping kubelet.service... Sep 6 00:16:44.583098 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:16:44.583481 systemd[1]: Stopped kubelet.service. Sep 6 00:16:44.583583 systemd[1]: kubelet.service: Consumed 1.307s CPU time. Sep 6 00:16:44.587071 systemd[1]: Starting kubelet.service... Sep 6 00:16:45.770673 systemd[1]: Started kubelet.service. 
Sep 6 00:16:45.902487 kubelet[1901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:45.903552 kubelet[1901]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:16:45.903847 kubelet[1901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:45.906200 kubelet[1901]: I0906 00:16:45.906068 1901 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:16:45.913028 sudo[1912]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:16:45.913499 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:16:45.924815 kubelet[1901]: I0906 00:16:45.924729 1901 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:16:45.925201 kubelet[1901]: I0906 00:16:45.925180 1901 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:16:45.926015 kubelet[1901]: I0906 00:16:45.925985 1901 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:16:45.928213 kubelet[1901]: I0906 00:16:45.928172 1901 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:16:45.939065 kubelet[1901]: I0906 00:16:45.939014 1901 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:16:45.960558 kubelet[1901]: E0906 00:16:45.960450 1901 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:16:45.960881 kubelet[1901]: I0906 00:16:45.960833 1901 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:16:45.967318 kubelet[1901]: I0906 00:16:45.967272 1901 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:16:45.967864 kubelet[1901]: I0906 00:16:45.967811 1901 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:16:45.968303 kubelet[1901]: I0906 00:16:45.967992 1901 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-27671cbf1d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:16:45.968561 kubelet[1901]: I0906 00:16:45.968533 1901 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:16:45.968717 kubelet[1901]: I0906 00:16:45.968695 1901 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 00:16:45.968912 kubelet[1901]: I0906 00:16:45.968893 1901 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:45.969234 kubelet[1901]: I0906 00:16:45.969211 1901 kubelet.go:446] 
"Attempting to sync node with API server" Sep 6 00:16:45.969441 kubelet[1901]: I0906 00:16:45.969418 1901 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:16:45.969576 kubelet[1901]: I0906 00:16:45.969554 1901 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:16:45.969695 kubelet[1901]: I0906 00:16:45.969677 1901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:16:45.971059 kubelet[1901]: I0906 00:16:45.970874 1901 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:16:45.973032 kubelet[1901]: I0906 00:16:45.972987 1901 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:16:45.975972 kubelet[1901]: I0906 00:16:45.975910 1901 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:16:45.975972 kubelet[1901]: I0906 00:16:45.975986 1901 server.go:1287] "Started kubelet" Sep 6 00:16:45.988781 kubelet[1901]: I0906 00:16:45.988620 1901 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:16:45.998776 kubelet[1901]: I0906 00:16:45.998668 1901 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:16:45.999799 kubelet[1901]: I0906 00:16:45.999762 1901 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:16:46.012681 kubelet[1901]: I0906 00:16:46.011834 1901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:16:46.033990 kubelet[1901]: I0906 00:16:46.032005 1901 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:16:46.034489 kubelet[1901]: I0906 00:16:46.034414 1901 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:16:46.034863 kubelet[1901]: I0906 00:16:46.034831 1901 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:16:46.035406 kubelet[1901]: E0906 00:16:46.035297 1901 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-27671cbf1d\" not found" Sep 6 00:16:46.039390 kubelet[1901]: I0906 00:16:46.039335 1901 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:16:46.039869 kubelet[1901]: I0906 00:16:46.039841 1901 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:16:46.044231 kubelet[1901]: I0906 00:16:46.044187 1901 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:16:46.044682 kubelet[1901]: I0906 00:16:46.044644 1901 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:16:46.055835 kubelet[1901]: I0906 00:16:46.055800 1901 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:16:46.122420 kubelet[1901]: I0906 00:16:46.122342 1901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:16:46.125102 kubelet[1901]: I0906 00:16:46.125068 1901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:16:46.125350 kubelet[1901]: I0906 00:16:46.125336 1901 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:16:46.125457 kubelet[1901]: I0906 00:16:46.125442 1901 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 6 00:16:46.125522 kubelet[1901]: I0906 00:16:46.125510 1901 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:16:46.125672 kubelet[1901]: E0906 00:16:46.125628 1901 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:16:46.162060 kubelet[1901]: I0906 00:16:46.161997 1901 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:16:46.162424 kubelet[1901]: I0906 00:16:46.162398 1901 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:16:46.162607 kubelet[1901]: I0906 00:16:46.162591 1901 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:46.163012 kubelet[1901]: I0906 00:16:46.162976 1901 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:16:46.163177 kubelet[1901]: I0906 00:16:46.163135 1901 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:16:46.163331 kubelet[1901]: I0906 00:16:46.163314 1901 policy_none.go:49] "None policy: Start" Sep 6 00:16:46.163486 kubelet[1901]: I0906 00:16:46.163472 1901 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:16:46.163586 kubelet[1901]: I0906 00:16:46.163574 1901 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:16:46.163930 kubelet[1901]: I0906 00:16:46.163893 1901 state_mem.go:75] "Updated machine memory state" Sep 6 00:16:46.179263 kubelet[1901]: I0906 00:16:46.179199 1901 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:16:46.179745 kubelet[1901]: I0906 00:16:46.179722 1901 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:16:46.179957 kubelet[1901]: I0906 00:16:46.179900 1901 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:16:46.186160 kubelet[1901]: I0906 00:16:46.186124 1901 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 6 00:16:46.200366 kubelet[1901]: E0906 00:16:46.200322 1901 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:16:46.228044 kubelet[1901]: I0906 00:16:46.227993 1901 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.229534 kubelet[1901]: I0906 00:16:46.229498 1901 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.230918 kubelet[1901]: I0906 00:16:46.230864 1901 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.237906 kubelet[1901]: W0906 00:16:46.237841 1901 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:46.240487 kubelet[1901]: W0906 00:16:46.240100 1901 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:46.241278 kubelet[1901]: W0906 00:16:46.240149 1901 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:46.241561 kubelet[1901]: E0906 00:16:46.241532 1901 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-27671cbf1d\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.245560 kubelet[1901]: I0906 00:16:46.245501 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-ca-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.246145 kubelet[1901]: I0906 00:16:46.246095 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.246886 kubelet[1901]: I0906 00:16:46.246854 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247065 kubelet[1901]: I0906 00:16:46.247043 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247226 kubelet[1901]: I0906 00:16:46.247201 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3f0424b8235ce872450ef91abda17d6-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-27671cbf1d\" (UID: \"b3f0424b8235ce872450ef91abda17d6\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247408 kubelet[1901]: I0906 00:16:46.247385 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247578 kubelet[1901]: I0906 00:16:46.247542 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9328ca53a9636d6447683de6858570b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-27671cbf1d\" (UID: \"f9328ca53a9636d6447683de6858570b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247821 kubelet[1901]: I0906 00:16:46.247797 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.247979 kubelet[1901]: I0906 00:16:46.247944 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a6510e771c2b2d7f035729e857bce9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-27671cbf1d\" (UID: \"35a6510e771c2b2d7f035729e857bce9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.309050 kubelet[1901]: I0906 00:16:46.308904 1901 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.322678 kubelet[1901]: I0906 00:16:46.322113 1901 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.322678 kubelet[1901]: I0906 00:16:46.322226 1901 kubelet_node_status.go:78] 
"Successfully registered node" node="ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:46.540844 kubelet[1901]: E0906 00:16:46.540799 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:46.541876 kubelet[1901]: E0906 00:16:46.541832 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:46.542093 kubelet[1901]: E0906 00:16:46.542047 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:46.962608 sudo[1912]: pam_unix(sudo:session): session closed for user root Sep 6 00:16:46.984671 kubelet[1901]: I0906 00:16:46.984609 1901 apiserver.go:52] "Watching apiserver" Sep 6 00:16:47.040807 kubelet[1901]: I0906 00:16:47.040742 1901 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:16:47.130041 kubelet[1901]: I0906 00:16:47.129949 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-27671cbf1d" podStartSLOduration=1.1299238329999999 podStartE2EDuration="1.129923833s" podCreationTimestamp="2025-09-06 00:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:47.117200358 +0000 UTC m=+1.330734631" watchObservedRunningTime="2025-09-06 00:16:47.129923833 +0000 UTC m=+1.343458095" Sep 6 00:16:47.148759 kubelet[1901]: I0906 00:16:47.148689 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-27671cbf1d" podStartSLOduration=1.148666941 podStartE2EDuration="1.148666941s" 
podCreationTimestamp="2025-09-06 00:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:47.130891719 +0000 UTC m=+1.344425995" watchObservedRunningTime="2025-09-06 00:16:47.148666941 +0000 UTC m=+1.362201243" Sep 6 00:16:47.163016 kubelet[1901]: I0906 00:16:47.162936 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" podStartSLOduration=5.162916061 podStartE2EDuration="5.162916061s" podCreationTimestamp="2025-09-06 00:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:47.149213893 +0000 UTC m=+1.362748169" watchObservedRunningTime="2025-09-06 00:16:47.162916061 +0000 UTC m=+1.376450344" Sep 6 00:16:47.172746 kubelet[1901]: E0906 00:16:47.172699 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:47.173500 kubelet[1901]: E0906 00:16:47.173467 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:47.174286 kubelet[1901]: I0906 00:16:47.174165 1901 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:47.186900 kubelet[1901]: W0906 00:16:47.186848 1901 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:47.187130 kubelet[1901]: E0906 00:16:47.186977 1901 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-27671cbf1d\" already exists" 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-27671cbf1d" Sep 6 00:16:47.187308 kubelet[1901]: E0906 00:16:47.187283 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:48.174946 kubelet[1901]: E0906 00:16:48.174902 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:48.176548 kubelet[1901]: E0906 00:16:48.176041 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:48.462212 kubelet[1901]: E0906 00:16:48.462062 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:49.073301 sudo[1297]: pam_unix(sudo:session): session closed for user root Sep 6 00:16:49.078926 sshd[1294]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:49.083188 systemd[1]: sshd@4-146.190.126.13:22-147.75.109.163:39874.service: Deactivated successfully. Sep 6 00:16:49.084159 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:16:49.084410 systemd[1]: session-5.scope: Consumed 6.013s CPU time. Sep 6 00:16:49.085161 systemd-logind[1186]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:16:49.086704 systemd-logind[1186]: Removed session 5. 
Sep 6 00:16:49.177184 kubelet[1901]: E0906 00:16:49.177141 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:49.323050 kubelet[1901]: E0906 00:16:49.322999 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:49.536519 kubelet[1901]: I0906 00:16:49.536475 1901 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:16:49.537234 env[1193]: time="2025-09-06T00:16:49.537188499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:16:49.538223 kubelet[1901]: I0906 00:16:49.538197 1901 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:16:50.179030 kubelet[1901]: E0906 00:16:50.178980 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:16:50.587833 systemd[1]: Created slice kubepods-besteffort-podb2f9e359_4a50_45ca_8481_24f67c694c58.slice. Sep 6 00:16:50.621281 systemd[1]: Created slice kubepods-burstable-podb12cbd82_a2f4_49a2_90f6_a2132dc55fbc.slice. 
Sep 6 00:16:50.687506 kubelet[1901]: I0906 00:16:50.687470 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-clustermesh-secrets\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.687812 kubelet[1901]: I0906 00:16:50.687790 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-kernel\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.687971 kubelet[1901]: I0906 00:16:50.687948 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2f9e359-4a50-45ca-8481-24f67c694c58-xtables-lock\") pod \"kube-proxy-6j95c\" (UID: \"b2f9e359-4a50-45ca-8481-24f67c694c58\") " pod="kube-system/kube-proxy-6j95c"
Sep 6 00:16:50.688670 kubelet[1901]: I0906 00:16:50.688626 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-bpf-maps\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.688896 kubelet[1901]: I0906 00:16:50.688873 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hostproc\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.689052 kubelet[1901]: I0906 00:16:50.689034 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-etc-cni-netd\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.689179 kubelet[1901]: I0906 00:16:50.689159 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-net\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.689358 kubelet[1901]: I0906 00:16:50.689320 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqhm\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-kube-api-access-ksqhm\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.689460 systemd[1]: Created slice kubepods-besteffort-podc7d4fb1f_cf2e_482a_9415_1a469c1c52ab.slice.
Sep 6 00:16:50.689931 kubelet[1901]: I0906 00:16:50.689911 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-cgroup\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690030 kubelet[1901]: I0906 00:16:50.690013 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-config-path\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690158 kubelet[1901]: I0906 00:16:50.690136 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpjx\" (UniqueName: \"kubernetes.io/projected/b2f9e359-4a50-45ca-8481-24f67c694c58-kube-api-access-wnpjx\") pod \"kube-proxy-6j95c\" (UID: \"b2f9e359-4a50-45ca-8481-24f67c694c58\") " pod="kube-system/kube-proxy-6j95c"
Sep 6 00:16:50.690290 kubelet[1901]: I0906 00:16:50.690268 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cni-path\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690440 kubelet[1901]: I0906 00:16:50.690413 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-lib-modules\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690579 kubelet[1901]: I0906 00:16:50.690561 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-xtables-lock\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690713 kubelet[1901]: I0906 00:16:50.690694 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2f9e359-4a50-45ca-8481-24f67c694c58-lib-modules\") pod \"kube-proxy-6j95c\" (UID: \"b2f9e359-4a50-45ca-8481-24f67c694c58\") " pod="kube-system/kube-proxy-6j95c"
Sep 6 00:16:50.690828 kubelet[1901]: I0906 00:16:50.690808 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-run\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.690934 kubelet[1901]: I0906 00:16:50.690914 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hubble-tls\") pod \"cilium-9jrzd\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") " pod="kube-system/cilium-9jrzd"
Sep 6 00:16:50.691027 kubelet[1901]: I0906 00:16:50.691011 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2f9e359-4a50-45ca-8481-24f67c694c58-kube-proxy\") pod \"kube-proxy-6j95c\" (UID: \"b2f9e359-4a50-45ca-8481-24f67c694c58\") " pod="kube-system/kube-proxy-6j95c"
Sep 6 00:16:50.696204 kubelet[1901]: I0906 00:16:50.696146 1901 status_manager.go:890] "Failed to get status for pod" podUID="c7d4fb1f-cf2e-482a-9415-1a469c1c52ab" pod="kube-system/cilium-operator-6c4d7847fc-fxbcq" err="pods \"cilium-operator-6c4d7847fc-fxbcq\" is forbidden: User \"system:node:ci-3510.3.8-n-27671cbf1d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-27671cbf1d' and this object"
Sep 6 00:16:50.792233 kubelet[1901]: I0906 00:16:50.792018 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fxbcq\" (UID: \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\") " pod="kube-system/cilium-operator-6c4d7847fc-fxbcq"
Sep 6 00:16:50.792441 kubelet[1901]: I0906 00:16:50.792277 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxmc\" (UniqueName: \"kubernetes.io/projected/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-kube-api-access-wgxmc\") pod \"cilium-operator-6c4d7847fc-fxbcq\" (UID: \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\") " pod="kube-system/cilium-operator-6c4d7847fc-fxbcq"
Sep 6 00:16:50.793690 kubelet[1901]: I0906 00:16:50.793655 1901 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 00:16:50.897594 kubelet[1901]: E0906 00:16:50.897438 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:50.905074 env[1193]: time="2025-09-06T00:16:50.904562819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6j95c,Uid:b2f9e359-4a50-45ca-8481-24f67c694c58,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:50.925655 kubelet[1901]: E0906 00:16:50.925597 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:50.928613 env[1193]: time="2025-09-06T00:16:50.928532036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jrzd,Uid:b12cbd82-a2f4-49a2-90f6-a2132dc55fbc,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:50.934510 env[1193]: time="2025-09-06T00:16:50.934382713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:50.934510 env[1193]: time="2025-09-06T00:16:50.934452770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:50.934745 env[1193]: time="2025-09-06T00:16:50.934500412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:50.935343 env[1193]: time="2025-09-06T00:16:50.935252674Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b20ac64eb6ddd71c72c926643a0798ecbb7620b121d3797852579e3000cddd5f pid=1984 runtime=io.containerd.runc.v2
Sep 6 00:16:50.952544 env[1193]: time="2025-09-06T00:16:50.952165731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:50.953318 env[1193]: time="2025-09-06T00:16:50.952745519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:50.953318 env[1193]: time="2025-09-06T00:16:50.952790712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:50.953318 env[1193]: time="2025-09-06T00:16:50.952994262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b pid=2009 runtime=io.containerd.runc.v2
Sep 6 00:16:50.970265 systemd[1]: Started cri-containerd-b20ac64eb6ddd71c72c926643a0798ecbb7620b121d3797852579e3000cddd5f.scope.
Sep 6 00:16:50.995841 kubelet[1901]: E0906 00:16:50.995386 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:50.995600 systemd[1]: Started cri-containerd-744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b.scope.
Sep 6 00:16:50.997490 env[1193]: time="2025-09-06T00:16:50.997448057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fxbcq,Uid:c7d4fb1f-cf2e-482a-9415-1a469c1c52ab,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:51.041621 env[1193]: time="2025-09-06T00:16:51.041501776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:51.041853 env[1193]: time="2025-09-06T00:16:51.041803841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:51.041986 env[1193]: time="2025-09-06T00:16:51.041960061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:51.042352 env[1193]: time="2025-09-06T00:16:51.042301667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab pid=2055 runtime=io.containerd.runc.v2
Sep 6 00:16:51.049123 env[1193]: time="2025-09-06T00:16:51.049045236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6j95c,Uid:b2f9e359-4a50-45ca-8481-24f67c694c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"b20ac64eb6ddd71c72c926643a0798ecbb7620b121d3797852579e3000cddd5f\""
Sep 6 00:16:51.052906 kubelet[1901]: E0906 00:16:51.051604 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:51.066271 env[1193]: time="2025-09-06T00:16:51.066193830Z" level=info msg="CreateContainer within sandbox \"b20ac64eb6ddd71c72c926643a0798ecbb7620b121d3797852579e3000cddd5f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 00:16:51.089368 env[1193]: time="2025-09-06T00:16:51.087275427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jrzd,Uid:b12cbd82-a2f4-49a2-90f6-a2132dc55fbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\""
Sep 6 00:16:51.087420 systemd[1]: Started cri-containerd-f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab.scope.
Sep 6 00:16:51.093460 kubelet[1901]: E0906 00:16:51.093310 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:51.100922 env[1193]: time="2025-09-06T00:16:51.100795724Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 00:16:51.112349 env[1193]: time="2025-09-06T00:16:51.112261843Z" level=info msg="CreateContainer within sandbox \"b20ac64eb6ddd71c72c926643a0798ecbb7620b121d3797852579e3000cddd5f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a11f27109cdb2ee367fd05cbf77776ea45ac6dd8f294a89742f74f7017b0e6e5\""
Sep 6 00:16:51.113877 env[1193]: time="2025-09-06T00:16:51.113706371Z" level=info msg="StartContainer for \"a11f27109cdb2ee367fd05cbf77776ea45ac6dd8f294a89742f74f7017b0e6e5\""
Sep 6 00:16:51.151053 systemd[1]: Started cri-containerd-a11f27109cdb2ee367fd05cbf77776ea45ac6dd8f294a89742f74f7017b0e6e5.scope.
Sep 6 00:16:51.210657 env[1193]: time="2025-09-06T00:16:51.210592202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fxbcq,Uid:c7d4fb1f-cf2e-482a-9415-1a469c1c52ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\""
Sep 6 00:16:51.213941 kubelet[1901]: E0906 00:16:51.212220 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:51.236702 env[1193]: time="2025-09-06T00:16:51.236631315Z" level=info msg="StartContainer for \"a11f27109cdb2ee367fd05cbf77776ea45ac6dd8f294a89742f74f7017b0e6e5\" returns successfully"
Sep 6 00:16:51.435192 update_engine[1189]: I0906 00:16:51.435012 1189 update_attempter.cc:509] Updating boot flags...
Sep 6 00:16:52.193470 kubelet[1901]: E0906 00:16:52.193433 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:53.202145 kubelet[1901]: E0906 00:16:53.201562 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:56.882467 kubelet[1901]: E0906 00:16:56.882331 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:56.910745 kubelet[1901]: I0906 00:16:56.910677 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6j95c" podStartSLOduration=6.910646902 podStartE2EDuration="6.910646902s" podCreationTimestamp="2025-09-06 00:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:52.230902569 +0000 UTC m=+6.444436854" watchObservedRunningTime="2025-09-06 00:16:56.910646902 +0000 UTC m=+11.124181175"
Sep 6 00:16:57.216672 kubelet[1901]: E0906 00:16:57.214322 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:16:58.484747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791575511.mount: Deactivated successfully.
Sep 6 00:17:02.919734 env[1193]: time="2025-09-06T00:17:02.919624389Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:02.921791 env[1193]: time="2025-09-06T00:17:02.921732018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:02.923694 env[1193]: time="2025-09-06T00:17:02.923623692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:02.924867 env[1193]: time="2025-09-06T00:17:02.924798958Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 6 00:17:02.935747 env[1193]: time="2025-09-06T00:17:02.935667714Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:17:02.936489 env[1193]: time="2025-09-06T00:17:02.936418933Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:17:02.957559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579964349.mount: Deactivated successfully.
Sep 6 00:17:02.968969 env[1193]: time="2025-09-06T00:17:02.968824035Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\""
Sep 6 00:17:02.971488 env[1193]: time="2025-09-06T00:17:02.971373381Z" level=info msg="StartContainer for \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\""
Sep 6 00:17:03.032339 systemd[1]: Started cri-containerd-5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2.scope.
Sep 6 00:17:03.089832 env[1193]: time="2025-09-06T00:17:03.089773597Z" level=info msg="StartContainer for \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\" returns successfully"
Sep 6 00:17:03.104170 systemd[1]: cri-containerd-5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2.scope: Deactivated successfully.
Sep 6 00:17:03.136662 env[1193]: time="2025-09-06T00:17:03.136586917Z" level=info msg="shim disconnected" id=5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2
Sep 6 00:17:03.136662 env[1193]: time="2025-09-06T00:17:03.136658165Z" level=warning msg="cleaning up after shim disconnected" id=5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2 namespace=k8s.io
Sep 6 00:17:03.136662 env[1193]: time="2025-09-06T00:17:03.136673348Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:03.148960 env[1193]: time="2025-09-06T00:17:03.148843420Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2331 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:03.233632 kubelet[1901]: E0906 00:17:03.233580 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:03.246291 env[1193]: time="2025-09-06T00:17:03.242649290Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:17:03.267424 env[1193]: time="2025-09-06T00:17:03.267353594Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\""
Sep 6 00:17:03.268410 env[1193]: time="2025-09-06T00:17:03.268357277Z" level=info msg="StartContainer for \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\""
Sep 6 00:17:03.311858 systemd[1]: Started cri-containerd-9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68.scope.
Sep 6 00:17:03.361973 env[1193]: time="2025-09-06T00:17:03.360477397Z" level=info msg="StartContainer for \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\" returns successfully"
Sep 6 00:17:03.376512 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:17:03.377464 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:17:03.377969 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 00:17:03.380743 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:17:03.385625 systemd[1]: cri-containerd-9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68.scope: Deactivated successfully.
Sep 6 00:17:03.402885 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:17:03.436768 env[1193]: time="2025-09-06T00:17:03.436648453Z" level=info msg="shim disconnected" id=9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68
Sep 6 00:17:03.437215 env[1193]: time="2025-09-06T00:17:03.436804179Z" level=warning msg="cleaning up after shim disconnected" id=9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68 namespace=k8s.io
Sep 6 00:17:03.437215 env[1193]: time="2025-09-06T00:17:03.436825152Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:03.449895 env[1193]: time="2025-09-06T00:17:03.449828298Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2399 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:03.952977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2-rootfs.mount: Deactivated successfully.
Sep 6 00:17:04.238839 kubelet[1901]: E0906 00:17:04.238083 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:04.241999 env[1193]: time="2025-09-06T00:17:04.241932861Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:17:04.296938 env[1193]: time="2025-09-06T00:17:04.296873040Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\""
Sep 6 00:17:04.298023 env[1193]: time="2025-09-06T00:17:04.297904238Z" level=info msg="StartContainer for \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\""
Sep 6 00:17:04.337055 systemd[1]: Started cri-containerd-84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494.scope.
Sep 6 00:17:04.389637 systemd[1]: cri-containerd-84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494.scope: Deactivated successfully.
Sep 6 00:17:04.396006 env[1193]: time="2025-09-06T00:17:04.395921265Z" level=info msg="StartContainer for \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\" returns successfully"
Sep 6 00:17:04.464955 env[1193]: time="2025-09-06T00:17:04.464752316Z" level=info msg="shim disconnected" id=84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494
Sep 6 00:17:04.464955 env[1193]: time="2025-09-06T00:17:04.464927283Z" level=warning msg="cleaning up after shim disconnected" id=84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494 namespace=k8s.io
Sep 6 00:17:04.464955 env[1193]: time="2025-09-06T00:17:04.464943680Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:04.484373 env[1193]: time="2025-09-06T00:17:04.484319737Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2456 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:04.951746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494-rootfs.mount: Deactivated successfully.
Sep 6 00:17:05.255590 kubelet[1901]: E0906 00:17:05.247591 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:05.275667 env[1193]: time="2025-09-06T00:17:05.275618875Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:17:05.300147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030723639.mount: Deactivated successfully.
Sep 6 00:17:05.315197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187354414.mount: Deactivated successfully.
Sep 6 00:17:05.323340 env[1193]: time="2025-09-06T00:17:05.323283255Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\""
Sep 6 00:17:05.326539 env[1193]: time="2025-09-06T00:17:05.326323200Z" level=info msg="StartContainer for \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\""
Sep 6 00:17:05.363353 systemd[1]: Started cri-containerd-d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9.scope.
Sep 6 00:17:05.412572 systemd[1]: cri-containerd-d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9.scope: Deactivated successfully.
Sep 6 00:17:05.418221 env[1193]: time="2025-09-06T00:17:05.417962811Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb12cbd82_a2f4_49a2_90f6_a2132dc55fbc.slice/cri-containerd-d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9.scope/memory.events\": no such file or directory"
Sep 6 00:17:05.422921 env[1193]: time="2025-09-06T00:17:05.422835796Z" level=info msg="StartContainer for \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\" returns successfully"
Sep 6 00:17:05.479460 env[1193]: time="2025-09-06T00:17:05.479402524Z" level=info msg="shim disconnected" id=d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9
Sep 6 00:17:05.479882 env[1193]: time="2025-09-06T00:17:05.479849956Z" level=warning msg="cleaning up after shim disconnected" id=d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9 namespace=k8s.io
Sep 6 00:17:05.480019 env[1193]: time="2025-09-06T00:17:05.479998044Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:05.505195 env[1193]: time="2025-09-06T00:17:05.505131670Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:05.918950 env[1193]: time="2025-09-06T00:17:05.918856684Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:05.920778 env[1193]: time="2025-09-06T00:17:05.920714375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:05.926032 env[1193]: time="2025-09-06T00:17:05.925974322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:05.927127 env[1193]: time="2025-09-06T00:17:05.927052702Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 6 00:17:05.934349 env[1193]: time="2025-09-06T00:17:05.934260354Z" level=info msg="CreateContainer within sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:17:05.961284 env[1193]: time="2025-09-06T00:17:05.961160955Z" level=info msg="CreateContainer within sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\""
Sep 6 00:17:05.962424 env[1193]: time="2025-09-06T00:17:05.962384507Z" level=info msg="StartContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\""
Sep 6 00:17:05.995425 systemd[1]: Started cri-containerd-2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca.scope.
Sep 6 00:17:06.060360 env[1193]: time="2025-09-06T00:17:06.060185293Z" level=info msg="StartContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" returns successfully"
Sep 6 00:17:06.252150 kubelet[1901]: E0906 00:17:06.252103 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:06.280411 kubelet[1901]: E0906 00:17:06.280343 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:06.287524 env[1193]: time="2025-09-06T00:17:06.287459094Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:17:06.329785 env[1193]: time="2025-09-06T00:17:06.329728858Z" level=info msg="CreateContainer within sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\""
Sep 6 00:17:06.331327 env[1193]: time="2025-09-06T00:17:06.331275875Z" level=info msg="StartContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\""
Sep 6 00:17:06.410971 systemd[1]: Started cri-containerd-7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492.scope.
Sep 6 00:17:06.451661 kubelet[1901]: I0906 00:17:06.451591 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fxbcq" podStartSLOduration=1.7439242940000002 podStartE2EDuration="16.45145847s" podCreationTimestamp="2025-09-06 00:16:50 +0000 UTC" firstStartedPulling="2025-09-06 00:16:51.222231204 +0000 UTC m=+5.435765456" lastFinishedPulling="2025-09-06 00:17:05.929765365 +0000 UTC m=+20.143299632" observedRunningTime="2025-09-06 00:17:06.304350344 +0000 UTC m=+20.517884629" watchObservedRunningTime="2025-09-06 00:17:06.45145847 +0000 UTC m=+20.664992742"
Sep 6 00:17:06.520222 env[1193]: time="2025-09-06T00:17:06.520012523Z" level=info msg="StartContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" returns successfully"
Sep 6 00:17:06.876492 kubelet[1901]: I0906 00:17:06.874977 1901 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 6 00:17:07.155402 systemd[1]: Created slice kubepods-burstable-pod85997cc3_3a99_4c56_9013_fe6c3001c54c.slice.
Sep 6 00:17:07.167418 systemd[1]: Created slice kubepods-burstable-podc2e76449_0264_4454_9cd2_92f2c81c5882.slice.
Sep 6 00:17:07.177468 kubelet[1901]: W0906 00:17:07.177400 1901 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.8-n-27671cbf1d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-27671cbf1d' and this object
Sep 6 00:17:07.177838 kubelet[1901]: I0906 00:17:07.177414 1901 status_manager.go:890] "Failed to get status for pod" podUID="85997cc3-3a99-4c56-9013-fe6c3001c54c" pod="kube-system/coredns-668d6bf9bc-8sb7p" err="pods \"coredns-668d6bf9bc-8sb7p\" is forbidden: User \"system:node:ci-3510.3.8-n-27671cbf1d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-27671cbf1d' and this object"
Sep 6 00:17:07.178337 kubelet[1901]: E0906 00:17:07.178301 1901 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.8-n-27671cbf1d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-27671cbf1d' and this object" logger="UnhandledError"
Sep 6 00:17:07.225013 kubelet[1901]: I0906 00:17:07.224923 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdp7f\" (UniqueName: \"kubernetes.io/projected/c2e76449-0264-4454-9cd2-92f2c81c5882-kube-api-access-zdp7f\") pod \"coredns-668d6bf9bc-p7nzb\" (UID: \"c2e76449-0264-4454-9cd2-92f2c81c5882\") " pod="kube-system/coredns-668d6bf9bc-p7nzb"
Sep 6 00:17:07.225013 kubelet[1901]: I0906 00:17:07.225004 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85997cc3-3a99-4c56-9013-fe6c3001c54c-config-volume\") pod \"coredns-668d6bf9bc-8sb7p\" (UID: \"85997cc3-3a99-4c56-9013-fe6c3001c54c\") " pod="kube-system/coredns-668d6bf9bc-8sb7p"
Sep 6 00:17:07.225372 kubelet[1901]: I0906 00:17:07.225053 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2e76449-0264-4454-9cd2-92f2c81c5882-config-volume\") pod \"coredns-668d6bf9bc-p7nzb\" (UID: \"c2e76449-0264-4454-9cd2-92f2c81c5882\") " pod="kube-system/coredns-668d6bf9bc-p7nzb"
Sep 6 00:17:07.225372 kubelet[1901]: I0906 00:17:07.225119 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p7r\" (UniqueName: \"kubernetes.io/projected/85997cc3-3a99-4c56-9013-fe6c3001c54c-kube-api-access-72p7r\") pod \"coredns-668d6bf9bc-8sb7p\" (UID: \"85997cc3-3a99-4c56-9013-fe6c3001c54c\") " pod="kube-system/coredns-668d6bf9bc-8sb7p"
Sep 6 00:17:07.291609 kubelet[1901]: E0906 00:17:07.291571 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:07.293084 kubelet[1901]: E0906 00:17:07.292272 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:08.293949 kubelet[1901]: E0906 00:17:08.293914 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:08.327140 kubelet[1901]: E0906 00:17:08.327063 1901 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:17:08.327746 kubelet[1901]: E0906 00:17:08.327298 1901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2e76449-0264-4454-9cd2-92f2c81c5882-config-volume podName:c2e76449-0264-4454-9cd2-92f2c81c5882 nodeName:}" failed. No retries permitted until 2025-09-06 00:17:08.827226571 +0000 UTC m=+23.040760855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c2e76449-0264-4454-9cd2-92f2c81c5882-config-volume") pod "coredns-668d6bf9bc-p7nzb" (UID: "c2e76449-0264-4454-9cd2-92f2c81c5882") : failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:17:08.327746 kubelet[1901]: E0906 00:17:08.327718 1901 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:17:08.327944 kubelet[1901]: E0906 00:17:08.327785 1901 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85997cc3-3a99-4c56-9013-fe6c3001c54c-config-volume podName:85997cc3-3a99-4c56-9013-fe6c3001c54c nodeName:}" failed. No retries permitted until 2025-09-06 00:17:08.827765206 +0000 UTC m=+23.041299476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85997cc3-3a99-4c56-9013-fe6c3001c54c-config-volume") pod "coredns-668d6bf9bc-8sb7p" (UID: "85997cc3-3a99-4c56-9013-fe6c3001c54c") : failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:17:08.963000 kubelet[1901]: E0906 00:17:08.962918 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:08.964360 env[1193]: time="2025-09-06T00:17:08.964274823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sb7p,Uid:85997cc3-3a99-4c56-9013-fe6c3001c54c,Namespace:kube-system,Attempt:0,}"
Sep 6 00:17:08.973114 kubelet[1901]: E0906 00:17:08.973039 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:08.973860 env[1193]: time="2025-09-06T00:17:08.973800059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7nzb,Uid:c2e76449-0264-4454-9cd2-92f2c81c5882,Namespace:kube-system,Attempt:0,}"
Sep 6 00:17:09.296117 kubelet[1901]: E0906 00:17:09.296013 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:09.856474 systemd-networkd[1005]: cilium_host: Link UP
Sep 6 00:17:09.856719 systemd-networkd[1005]: cilium_net: Link UP
Sep 6 00:17:09.856726 systemd-networkd[1005]: cilium_net: Gained carrier
Sep 6 00:17:09.857026 systemd-networkd[1005]: cilium_host: Gained carrier
Sep 6 00:17:09.859357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 00:17:09.864428 systemd-networkd[1005]: cilium_host: Gained IPv6LL
Sep 6 00:17:10.063986 systemd-networkd[1005]: cilium_vxlan: Link UP
Sep 6 00:17:10.063997 systemd-networkd[1005]: cilium_vxlan: Gained carrier
Sep 6 00:17:10.406641 systemd-networkd[1005]: cilium_net: Gained IPv6LL
Sep 6 00:17:10.495292 kernel: NET: Registered PF_ALG protocol family
Sep 6 00:17:11.237528 systemd-networkd[1005]: cilium_vxlan: Gained IPv6LL
Sep 6 00:17:11.657360 systemd-networkd[1005]: lxc_health: Link UP
Sep 6 00:17:11.681526 systemd-networkd[1005]: lxc_health: Gained carrier
Sep 6 00:17:11.682388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:17:12.045767 systemd-networkd[1005]: lxce7f24a52a173: Link UP
Sep 6 00:17:12.053287 kernel: eth0: renamed from tmpca39f
Sep 6 00:17:12.065404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7f24a52a173: link becomes ready
Sep 6 00:17:12.062555 systemd-networkd[1005]: lxce7f24a52a173: Gained carrier
Sep 6 00:17:12.093338 systemd-networkd[1005]: lxc45b695b21e05: Link UP
Sep 6 00:17:12.108268 kernel: eth0: renamed from tmp1823c
Sep 6 00:17:12.115443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45b695b21e05: link becomes ready
Sep 6 00:17:12.115062 systemd-networkd[1005]: lxc45b695b21e05: Gained carrier
Sep 6 00:17:12.930713 kubelet[1901]: E0906 00:17:12.930672 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:12.965273 kubelet[1901]: I0906 00:17:12.965096 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9jrzd" podStartSLOduration=11.13693702 podStartE2EDuration="22.965070546s" podCreationTimestamp="2025-09-06 00:16:50 +0000 UTC" firstStartedPulling="2025-09-06 00:16:51.098975667 +0000 UTC m=+5.312509932" lastFinishedPulling="2025-09-06 00:17:02.927109186 +0000 UTC m=+17.140643458" observedRunningTime="2025-09-06 00:17:07.441502787 +0000 UTC m=+21.655037070" watchObservedRunningTime="2025-09-06 00:17:12.965070546 +0000 UTC m=+27.178604819"
Sep 6 00:17:13.303993 kubelet[1901]: E0906 00:17:13.303933 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:13.669578 systemd-networkd[1005]: lxc_health: Gained IPv6LL
Sep 6 00:17:13.670114 systemd-networkd[1005]: lxc45b695b21e05: Gained IPv6LL
Sep 6 00:17:13.733600 systemd-networkd[1005]: lxce7f24a52a173: Gained IPv6LL
Sep 6 00:17:14.306492 kubelet[1901]: E0906 00:17:14.306433 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:18.422356 env[1193]: time="2025-09-06T00:17:18.420993491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:17:18.422356 env[1193]: time="2025-09-06T00:17:18.421076794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:17:18.422356 env[1193]: time="2025-09-06T00:17:18.421096248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:17:18.422356 env[1193]: time="2025-09-06T00:17:18.421395986Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4 pid=3103 runtime=io.containerd.runc.v2
Sep 6 00:17:18.457102 systemd[1]: run-containerd-runc-k8s.io-1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4-runc.yQCI3j.mount: Deactivated successfully.
Sep 6 00:17:18.463571 systemd[1]: Started cri-containerd-1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4.scope.
Sep 6 00:17:18.489879 env[1193]: time="2025-09-06T00:17:18.489729480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:17:18.489879 env[1193]: time="2025-09-06T00:17:18.489811171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:17:18.490224 env[1193]: time="2025-09-06T00:17:18.490155796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:17:18.490875 env[1193]: time="2025-09-06T00:17:18.490778033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca39f6c042cc0c2c074bfcbff87e0087aa28109e118e1969901e7cd3a3ec63c6 pid=3136 runtime=io.containerd.runc.v2
Sep 6 00:17:18.527769 systemd[1]: Started cri-containerd-ca39f6c042cc0c2c074bfcbff87e0087aa28109e118e1969901e7cd3a3ec63c6.scope.
Sep 6 00:17:18.587177 env[1193]: time="2025-09-06T00:17:18.587088506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p7nzb,Uid:c2e76449-0264-4454-9cd2-92f2c81c5882,Namespace:kube-system,Attempt:0,} returns sandbox id \"1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4\""
Sep 6 00:17:18.588110 kubelet[1901]: E0906 00:17:18.588050 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:18.593519 env[1193]: time="2025-09-06T00:17:18.593451552Z" level=info msg="CreateContainer within sandbox \"1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:17:18.619046 env[1193]: time="2025-09-06T00:17:18.618879726Z" level=info msg="CreateContainer within sandbox \"1823c0ff8d32308d47c40fde03cf33cb1462b11781d636388850b86ad76d6eb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d419e37e81166f9a6da14fe7f530503369192012f069514adbbc8a6f5e34a64\""
Sep 6 00:17:18.620824 env[1193]: time="2025-09-06T00:17:18.620776278Z" level=info msg="StartContainer for \"2d419e37e81166f9a6da14fe7f530503369192012f069514adbbc8a6f5e34a64\""
Sep 6 00:17:18.659508 systemd[1]: Started cri-containerd-2d419e37e81166f9a6da14fe7f530503369192012f069514adbbc8a6f5e34a64.scope.
Sep 6 00:17:18.726010 env[1193]: time="2025-09-06T00:17:18.724862556Z" level=info msg="StartContainer for \"2d419e37e81166f9a6da14fe7f530503369192012f069514adbbc8a6f5e34a64\" returns successfully"
Sep 6 00:17:18.731456 env[1193]: time="2025-09-06T00:17:18.731383430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sb7p,Uid:85997cc3-3a99-4c56-9013-fe6c3001c54c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca39f6c042cc0c2c074bfcbff87e0087aa28109e118e1969901e7cd3a3ec63c6\""
Sep 6 00:17:18.732958 kubelet[1901]: E0906 00:17:18.732912 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:18.735860 env[1193]: time="2025-09-06T00:17:18.735800815Z" level=info msg="CreateContainer within sandbox \"ca39f6c042cc0c2c074bfcbff87e0087aa28109e118e1969901e7cd3a3ec63c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:17:18.758775 env[1193]: time="2025-09-06T00:17:18.758713829Z" level=info msg="CreateContainer within sandbox \"ca39f6c042cc0c2c074bfcbff87e0087aa28109e118e1969901e7cd3a3ec63c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a15105a5d612076457a8290f62233a47f7108ade3f8da6f1e1b72443b85238e7\""
Sep 6 00:17:18.759538 env[1193]: time="2025-09-06T00:17:18.759484967Z" level=info msg="StartContainer for \"a15105a5d612076457a8290f62233a47f7108ade3f8da6f1e1b72443b85238e7\""
Sep 6 00:17:18.792286 systemd[1]: Started cri-containerd-a15105a5d612076457a8290f62233a47f7108ade3f8da6f1e1b72443b85238e7.scope.
Sep 6 00:17:18.865324 env[1193]: time="2025-09-06T00:17:18.865257097Z" level=info msg="StartContainer for \"a15105a5d612076457a8290f62233a47f7108ade3f8da6f1e1b72443b85238e7\" returns successfully"
Sep 6 00:17:19.321167 kubelet[1901]: E0906 00:17:19.321113 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:19.327616 kubelet[1901]: E0906 00:17:19.327574 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:19.474023 kubelet[1901]: I0906 00:17:19.473923 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p7nzb" podStartSLOduration=29.473898907 podStartE2EDuration="29.473898907s" podCreationTimestamp="2025-09-06 00:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:19.473284275 +0000 UTC m=+33.686818539" watchObservedRunningTime="2025-09-06 00:17:19.473898907 +0000 UTC m=+33.687433179"
Sep 6 00:17:19.523986 kubelet[1901]: I0906 00:17:19.523880 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8sb7p" podStartSLOduration=29.523855034 podStartE2EDuration="29.523855034s" podCreationTimestamp="2025-09-06 00:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:19.495556503 +0000 UTC m=+33.709090779" watchObservedRunningTime="2025-09-06 00:17:19.523855034 +0000 UTC m=+33.737389311"
Sep 6 00:17:20.330135 kubelet[1901]: E0906 00:17:20.330064 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:20.330817 kubelet[1901]: E0906 00:17:20.330745 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:21.332300 kubelet[1901]: E0906 00:17:21.332212 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:21.333053 kubelet[1901]: E0906 00:17:21.333014 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:26.808429 systemd[1]: Started sshd@5-146.190.126.13:22-147.75.109.163:49316.service.
Sep 6 00:17:26.876120 sshd[3268]: Accepted publickey for core from 147.75.109.163 port 49316 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:26.878564 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:26.887995 systemd[1]: Started session-6.scope.
Sep 6 00:17:26.889211 systemd-logind[1186]: New session 6 of user core.
Sep 6 00:17:27.139899 sshd[3268]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:27.146793 systemd-logind[1186]: Session 6 logged out. Waiting for processes to exit.
Sep 6 00:17:27.149277 systemd[1]: sshd@5-146.190.126.13:22-147.75.109.163:49316.service: Deactivated successfully.
Sep 6 00:17:27.150563 systemd[1]: session-6.scope: Deactivated successfully.
Sep 6 00:17:27.152113 systemd-logind[1186]: Removed session 6.
Sep 6 00:17:32.148929 systemd[1]: Started sshd@6-146.190.126.13:22-147.75.109.163:33544.service.
Sep 6 00:17:32.203945 sshd[3280]: Accepted publickey for core from 147.75.109.163 port 33544 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:32.207912 sshd[3280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:32.216899 systemd[1]: Started session-7.scope.
Sep 6 00:17:32.218398 systemd-logind[1186]: New session 7 of user core.
Sep 6 00:17:32.404409 sshd[3280]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:32.409739 systemd-logind[1186]: Session 7 logged out. Waiting for processes to exit.
Sep 6 00:17:32.409969 systemd[1]: sshd@6-146.190.126.13:22-147.75.109.163:33544.service: Deactivated successfully.
Sep 6 00:17:32.410738 systemd[1]: session-7.scope: Deactivated successfully.
Sep 6 00:17:32.412222 systemd-logind[1186]: Removed session 7.
Sep 6 00:17:33.945283 systemd[1]: Started sshd@7-146.190.126.13:22-47.237.107.177:44726.service.
Sep 6 00:17:34.730571 sshd[3294]: Invalid user from 47.237.107.177 port 44726
Sep 6 00:17:37.433931 systemd[1]: Started sshd@8-146.190.126.13:22-147.75.109.163:33554.service.
Sep 6 00:17:37.498625 sshd[3297]: Accepted publickey for core from 147.75.109.163 port 33554 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:37.502138 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:37.513153 systemd[1]: Started session-8.scope.
Sep 6 00:17:37.514445 systemd-logind[1186]: New session 8 of user core.
Sep 6 00:17:37.706507 sshd[3297]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:37.713405 systemd-logind[1186]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:17:37.714541 systemd[1]: sshd@8-146.190.126.13:22-147.75.109.163:33554.service: Deactivated successfully.
Sep 6 00:17:37.716235 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:17:37.720403 systemd-logind[1186]: Removed session 8.
Sep 6 00:17:41.938836 sshd[3294]: Connection closed by invalid user 47.237.107.177 port 44726 [preauth]
Sep 6 00:17:41.941392 systemd[1]: sshd@7-146.190.126.13:22-47.237.107.177:44726.service: Deactivated successfully.
Sep 6 00:17:42.713704 systemd[1]: Started sshd@9-146.190.126.13:22-147.75.109.163:36624.service.
Sep 6 00:17:42.769515 sshd[3311]: Accepted publickey for core from 147.75.109.163 port 36624 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:42.770988 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:42.783091 systemd[1]: Started session-9.scope.
Sep 6 00:17:42.784380 systemd-logind[1186]: New session 9 of user core.
Sep 6 00:17:42.952184 sshd[3311]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:42.956147 systemd-logind[1186]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:17:42.959012 systemd[1]: sshd@9-146.190.126.13:22-147.75.109.163:36624.service: Deactivated successfully.
Sep 6 00:17:42.960088 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:17:42.962401 systemd-logind[1186]: Removed session 9.
Sep 6 00:17:47.960676 systemd[1]: Started sshd@10-146.190.126.13:22-147.75.109.163:36636.service.
Sep 6 00:17:48.008787 sshd[3326]: Accepted publickey for core from 147.75.109.163 port 36636 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:48.011924 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:48.019791 systemd-logind[1186]: New session 10 of user core.
Sep 6 00:17:48.020459 systemd[1]: Started session-10.scope.
Sep 6 00:17:48.183963 sshd[3326]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:48.192049 systemd[1]: Started sshd@11-146.190.126.13:22-147.75.109.163:36642.service.
Sep 6 00:17:48.192834 systemd[1]: sshd@10-146.190.126.13:22-147.75.109.163:36636.service: Deactivated successfully.
Sep 6 00:17:48.195144 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 00:17:48.196405 systemd-logind[1186]: Session 10 logged out. Waiting for processes to exit.
Sep 6 00:17:48.198437 systemd-logind[1186]: Removed session 10.
Sep 6 00:17:48.246089 sshd[3338]: Accepted publickey for core from 147.75.109.163 port 36642 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:48.249143 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:48.256770 systemd[1]: Started session-11.scope.
Sep 6 00:17:48.257854 systemd-logind[1186]: New session 11 of user core.
Sep 6 00:17:48.500419 sshd[3338]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:48.509096 systemd[1]: Started sshd@12-146.190.126.13:22-147.75.109.163:36656.service.
Sep 6 00:17:48.513657 systemd[1]: sshd@11-146.190.126.13:22-147.75.109.163:36642.service: Deactivated successfully.
Sep 6 00:17:48.515002 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 00:17:48.521419 systemd-logind[1186]: Session 11 logged out. Waiting for processes to exit.
Sep 6 00:17:48.524536 systemd-logind[1186]: Removed session 11.
Sep 6 00:17:48.575684 sshd[3351]: Accepted publickey for core from 147.75.109.163 port 36656 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:48.577730 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:48.584349 systemd[1]: Started session-12.scope.
Sep 6 00:17:48.584350 systemd-logind[1186]: New session 12 of user core.
Sep 6 00:17:48.735217 sshd[3351]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:48.738044 systemd-logind[1186]: Session 12 logged out. Waiting for processes to exit.
Sep 6 00:17:48.738896 systemd[1]: sshd@12-146.190.126.13:22-147.75.109.163:36656.service: Deactivated successfully.
Sep 6 00:17:48.739701 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 00:17:48.740590 systemd-logind[1186]: Removed session 12.
Sep 6 00:17:53.746700 systemd[1]: Started sshd@13-146.190.126.13:22-147.75.109.163:47294.service.
Sep 6 00:17:53.804647 sshd[3366]: Accepted publickey for core from 147.75.109.163 port 47294 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:53.808290 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:53.818345 systemd-logind[1186]: New session 13 of user core.
Sep 6 00:17:53.819916 systemd[1]: Started session-13.scope.
Sep 6 00:17:54.006282 sshd[3366]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:54.012411 systemd[1]: sshd@13-146.190.126.13:22-147.75.109.163:47294.service: Deactivated successfully.
Sep 6 00:17:54.013291 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:17:54.014506 systemd-logind[1186]: Session 13 logged out. Waiting for processes to exit.
Sep 6 00:17:54.016272 systemd-logind[1186]: Removed session 13.
Sep 6 00:17:57.127537 kubelet[1901]: E0906 00:17:57.127460 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:17:59.016639 systemd[1]: Started sshd@14-146.190.126.13:22-147.75.109.163:47296.service.
Sep 6 00:17:59.068112 sshd[3378]: Accepted publickey for core from 147.75.109.163 port 47296 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:59.071931 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:59.078047 systemd[1]: Started session-14.scope.
Sep 6 00:17:59.079336 systemd-logind[1186]: New session 14 of user core.
Sep 6 00:17:59.234633 sshd[3378]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:59.250604 systemd[1]: Started sshd@15-146.190.126.13:22-147.75.109.163:47306.service.
Sep 6 00:17:59.253153 systemd[1]: sshd@14-146.190.126.13:22-147.75.109.163:47296.service: Deactivated successfully.
Sep 6 00:17:59.254163 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:17:59.255949 systemd-logind[1186]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:17:59.257179 systemd-logind[1186]: Removed session 14.
Sep 6 00:17:59.303555 sshd[3389]: Accepted publickey for core from 147.75.109.163 port 47306 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:59.306519 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:59.313217 systemd-logind[1186]: New session 15 of user core.
Sep 6 00:17:59.314184 systemd[1]: Started session-15.scope.
Sep 6 00:17:59.804468 sshd[3389]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:59.812115 systemd[1]: Started sshd@16-146.190.126.13:22-147.75.109.163:47320.service.
Sep 6 00:17:59.819987 systemd-logind[1186]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:17:59.820789 systemd[1]: sshd@15-146.190.126.13:22-147.75.109.163:47306.service: Deactivated successfully.
Sep 6 00:17:59.822000 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:17:59.823675 systemd-logind[1186]: Removed session 15.
Sep 6 00:17:59.881901 sshd[3398]: Accepted publickey for core from 147.75.109.163 port 47320 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:59.884429 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:59.898546 systemd[1]: Started session-16.scope.
Sep 6 00:17:59.899691 systemd-logind[1186]: New session 16 of user core.
Sep 6 00:18:00.830934 sshd[3398]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:00.839505 systemd[1]: sshd@16-146.190.126.13:22-147.75.109.163:47320.service: Deactivated successfully.
Sep 6 00:18:00.840565 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:18:00.844703 systemd-logind[1186]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:18:00.847272 systemd[1]: Started sshd@17-146.190.126.13:22-147.75.109.163:57358.service.
Sep 6 00:18:00.856696 systemd-logind[1186]: Removed session 16.
Sep 6 00:18:00.901537 sshd[3417]: Accepted publickey for core from 147.75.109.163 port 57358 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:00.904271 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:00.913788 systemd-logind[1186]: New session 17 of user core.
Sep 6 00:18:00.914500 systemd[1]: Started session-17.scope.
Sep 6 00:18:01.324765 sshd[3417]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:01.334183 systemd[1]: sshd@17-146.190.126.13:22-147.75.109.163:57358.service: Deactivated successfully.
Sep 6 00:18:01.336529 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:18:01.340763 systemd-logind[1186]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:18:01.346396 systemd[1]: Started sshd@18-146.190.126.13:22-147.75.109.163:57372.service.
Sep 6 00:18:01.350089 systemd-logind[1186]: Removed session 17.
Sep 6 00:18:01.406798 sshd[3430]: Accepted publickey for core from 147.75.109.163 port 57372 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:01.409274 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:01.418408 systemd-logind[1186]: New session 18 of user core.
Sep 6 00:18:01.418972 systemd[1]: Started session-18.scope.
Sep 6 00:18:01.591160 sshd[3430]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:01.596353 systemd[1]: sshd@18-146.190.126.13:22-147.75.109.163:57372.service: Deactivated successfully.
Sep 6 00:18:01.597445 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:18:01.599579 systemd-logind[1186]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:18:01.602041 systemd-logind[1186]: Removed session 18.
Sep 6 00:18:04.128571 kubelet[1901]: E0906 00:18:04.128522 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:04.130548 kubelet[1901]: E0906 00:18:04.128630 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:06.601460 systemd[1]: Started sshd@19-146.190.126.13:22-147.75.109.163:57384.service.
Sep 6 00:18:06.656416 sshd[3442]: Accepted publickey for core from 147.75.109.163 port 57384 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:06.658454 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:06.665030 systemd-logind[1186]: New session 19 of user core.
Sep 6 00:18:06.665951 systemd[1]: Started session-19.scope.
Sep 6 00:18:06.822225 sshd[3442]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:06.826123 systemd-logind[1186]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:18:06.826650 systemd[1]: sshd@19-146.190.126.13:22-147.75.109.163:57384.service: Deactivated successfully.
Sep 6 00:18:06.827813 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:18:06.829667 systemd-logind[1186]: Removed session 19.
Sep 6 00:18:11.831530 systemd[1]: Started sshd@20-146.190.126.13:22-147.75.109.163:36450.service.
Sep 6 00:18:11.882192 sshd[3457]: Accepted publickey for core from 147.75.109.163 port 36450 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:11.884789 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:11.896023 systemd-logind[1186]: New session 20 of user core.
Sep 6 00:18:11.896582 systemd[1]: Started session-20.scope.
Sep 6 00:18:12.052771 sshd[3457]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:12.057130 systemd[1]: sshd@20-146.190.126.13:22-147.75.109.163:36450.service: Deactivated successfully.
Sep 6 00:18:12.057970 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:18:12.058920 systemd-logind[1186]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:18:12.060084 systemd-logind[1186]: Removed session 20.
Sep 6 00:18:17.064049 systemd[1]: Started sshd@21-146.190.126.13:22-147.75.109.163:36454.service.
Sep 6 00:18:17.121547 sshd[3469]: Accepted publickey for core from 147.75.109.163 port 36454 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:17.123674 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:17.131150 systemd[1]: Started session-21.scope.
Sep 6 00:18:17.132227 systemd-logind[1186]: New session 21 of user core.
Sep 6 00:18:17.265325 sshd[3469]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:17.269864 systemd[1]: sshd@21-146.190.126.13:22-147.75.109.163:36454.service: Deactivated successfully.
Sep 6 00:18:17.270877 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:18:17.272678 systemd-logind[1186]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:18:17.274303 systemd-logind[1186]: Removed session 21.
Sep 6 00:18:19.127293 kubelet[1901]: E0906 00:18:19.127221 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:22.274825 systemd[1]: Started sshd@22-146.190.126.13:22-147.75.109.163:55266.service.
Sep 6 00:18:22.328287 sshd[3486]: Accepted publickey for core from 147.75.109.163 port 55266 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:22.330919 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:22.338375 systemd[1]: Started session-22.scope.
Sep 6 00:18:22.339358 systemd-logind[1186]: New session 22 of user core.
Sep 6 00:18:22.507208 sshd[3486]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:22.512345 systemd-logind[1186]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:18:22.512458 systemd[1]: sshd@22-146.190.126.13:22-147.75.109.163:55266.service: Deactivated successfully.
Sep 6 00:18:22.513509 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:18:22.515147 systemd-logind[1186]: Removed session 22.
Sep 6 00:18:23.127606 kubelet[1901]: E0906 00:18:23.127549 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:26.127056 kubelet[1901]: E0906 00:18:26.127000 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:27.127041 kubelet[1901]: E0906 00:18:27.126954 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:27.515223 systemd[1]: Started sshd@23-146.190.126.13:22-147.75.109.163:55276.service.
Sep 6 00:18:27.571476 sshd[3498]: Accepted publickey for core from 147.75.109.163 port 55276 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:27.578104 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:27.585087 systemd-logind[1186]: New session 23 of user core.
Sep 6 00:18:27.585709 systemd[1]: Started session-23.scope.
Sep 6 00:18:27.758731 sshd[3498]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:27.764393 systemd-logind[1186]: Session 23 logged out. Waiting for processes to exit.
Sep 6 00:18:27.764857 systemd[1]: sshd@23-146.190.126.13:22-147.75.109.163:55276.service: Deactivated successfully.
Sep 6 00:18:27.766069 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 00:18:27.769557 systemd-logind[1186]: Removed session 23.
Sep 6 00:18:32.766739 systemd[1]: Started sshd@24-146.190.126.13:22-147.75.109.163:58352.service.
Sep 6 00:18:32.813428 sshd[3511]: Accepted publickey for core from 147.75.109.163 port 58352 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:32.816410 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:32.823927 systemd[1]: Started session-24.scope.
Sep 6 00:18:32.825464 systemd-logind[1186]: New session 24 of user core.
Sep 6 00:18:32.994319 sshd[3511]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:33.002778 systemd[1]: Started sshd@25-146.190.126.13:22-147.75.109.163:58368.service.
Sep 6 00:18:33.003981 systemd[1]: sshd@24-146.190.126.13:22-147.75.109.163:58352.service: Deactivated successfully.
Sep 6 00:18:33.005706 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:18:33.015953 systemd-logind[1186]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:18:33.017693 systemd-logind[1186]: Removed session 24.
Sep 6 00:18:33.062620 sshd[3521]: Accepted publickey for core from 147.75.109.163 port 58368 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:33.065126 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:33.077925 systemd[1]: Started session-25.scope.
Sep 6 00:18:33.078775 systemd-logind[1186]: New session 25 of user core.
Sep 6 00:18:35.434605 systemd[1]: run-containerd-runc-k8s.io-7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492-runc.vvADEC.mount: Deactivated successfully.
Sep 6 00:18:35.471512 env[1193]: time="2025-09-06T00:18:35.471443027Z" level=info msg="StopContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" with timeout 30 (s)"
Sep 6 00:18:35.472918 env[1193]: time="2025-09-06T00:18:35.472853073Z" level=info msg="Stop container \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" with signal terminated"
Sep 6 00:18:35.511698 env[1193]: time="2025-09-06T00:18:35.511603698Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:18:35.516434 systemd[1]: cri-containerd-2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca.scope: Deactivated successfully.
Sep 6 00:18:35.527675 env[1193]: time="2025-09-06T00:18:35.527623195Z" level=info msg="StopContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" with timeout 2 (s)"
Sep 6 00:18:35.528052 env[1193]: time="2025-09-06T00:18:35.528013846Z" level=info msg="Stop container \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" with signal terminated"
Sep 6 00:18:35.538128 systemd-networkd[1005]: lxc_health: Link DOWN
Sep 6 00:18:35.538138 systemd-networkd[1005]: lxc_health: Lost carrier
Sep 6 00:18:35.581956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca-rootfs.mount: Deactivated successfully.
Sep 6 00:18:35.584739 systemd[1]: cri-containerd-7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492.scope: Deactivated successfully.
Sep 6 00:18:35.585086 systemd[1]: cri-containerd-7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492.scope: Consumed 10.678s CPU time.
Sep 6 00:18:35.597098 env[1193]: time="2025-09-06T00:18:35.597011271Z" level=info msg="shim disconnected" id=2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca
Sep 6 00:18:35.597098 env[1193]: time="2025-09-06T00:18:35.597082289Z" level=warning msg="cleaning up after shim disconnected" id=2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca namespace=k8s.io
Sep 6 00:18:35.597098 env[1193]: time="2025-09-06T00:18:35.597099802Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:35.623432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492-rootfs.mount: Deactivated successfully.
Sep 6 00:18:35.632697 env[1193]: time="2025-09-06T00:18:35.632634521Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3578 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:35.638984 env[1193]: time="2025-09-06T00:18:35.638915771Z" level=info msg="StopContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" returns successfully"
Sep 6 00:18:35.639381 env[1193]: time="2025-09-06T00:18:35.639303664Z" level=info msg="shim disconnected" id=7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492
Sep 6 00:18:35.639381 env[1193]: time="2025-09-06T00:18:35.639361166Z" level=warning msg="cleaning up after shim disconnected" id=7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492 namespace=k8s.io
Sep 6 00:18:35.639381 env[1193]: time="2025-09-06T00:18:35.639375320Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:35.654551 env[1193]: time="2025-09-06T00:18:35.654505306Z" level=info msg="StopPodSandbox for \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\""
Sep 6 00:18:35.655138 env[1193]: time="2025-09-06T00:18:35.655105778Z" level=info msg="Container to stop \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.668015 env[1193]: time="2025-09-06T00:18:35.667946529Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:35.669644 systemd[1]: cri-containerd-f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab.scope: Deactivated successfully.
Sep 6 00:18:35.673100 env[1193]: time="2025-09-06T00:18:35.673020706Z" level=info msg="StopContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" returns successfully"
Sep 6 00:18:35.674452 env[1193]: time="2025-09-06T00:18:35.674403866Z" level=info msg="StopPodSandbox for \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\""
Sep 6 00:18:35.674804 env[1193]: time="2025-09-06T00:18:35.674769433Z" level=info msg="Container to stop \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.674984 env[1193]: time="2025-09-06T00:18:35.674950747Z" level=info msg="Container to stop \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.675132 env[1193]: time="2025-09-06T00:18:35.675103009Z" level=info msg="Container to stop \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.675311 env[1193]: time="2025-09-06T00:18:35.675233631Z" level=info msg="Container to stop \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.675451 env[1193]: time="2025-09-06T00:18:35.675421571Z" level=info msg="Container to stop \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:35.684584 systemd[1]: cri-containerd-744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b.scope: Deactivated successfully.
Sep 6 00:18:35.716313 env[1193]: time="2025-09-06T00:18:35.714765903Z" level=info msg="shim disconnected" id=f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab
Sep 6 00:18:35.717528 env[1193]: time="2025-09-06T00:18:35.717478225Z" level=warning msg="cleaning up after shim disconnected" id=f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab namespace=k8s.io
Sep 6 00:18:35.717706 env[1193]: time="2025-09-06T00:18:35.717684930Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:35.731084 env[1193]: time="2025-09-06T00:18:35.731011305Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3647 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:35.731516 env[1193]: time="2025-09-06T00:18:35.731453972Z" level=info msg="TearDown network for sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" successfully"
Sep 6 00:18:35.731516 env[1193]: time="2025-09-06T00:18:35.731496309Z" level=info msg="StopPodSandbox for \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" returns successfully"
Sep 6 00:18:35.748279 env[1193]: time="2025-09-06T00:18:35.747906943Z" level=info msg="shim disconnected" id=744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b
Sep 6 00:18:35.748279 env[1193]: time="2025-09-06T00:18:35.748077691Z" level=warning msg="cleaning up after shim disconnected" id=744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b namespace=k8s.io
Sep 6 00:18:35.748279 env[1193]: time="2025-09-06T00:18:35.748098175Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:35.768094 env[1193]: time="2025-09-06T00:18:35.768032110Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:35.768473 env[1193]: time="2025-09-06T00:18:35.768473444Z" level=info msg="TearDown network for sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" successfully"
Sep 6 00:18:35.768526 env[1193]: time="2025-09-06T00:18:35.768508095Z" level=info msg="StopPodSandbox for \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" returns successfully"
Sep 6 00:18:35.845927 kubelet[1901]: I0906 00:18:35.845852 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hostproc\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.846717 kubelet[1901]: I0906 00:18:35.846682 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-cgroup\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.846904 kubelet[1901]: I0906 00:18:35.846873 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-cilium-config-path\") pod \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\" (UID: \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\") "
Sep 6 00:18:35.847055 kubelet[1901]: I0906 00:18:35.847032 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgxmc\" (UniqueName: \"kubernetes.io/projected/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-kube-api-access-wgxmc\") pod \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\" (UID: \"c7d4fb1f-cf2e-482a-9415-1a469c1c52ab\") "
Sep 6 00:18:35.847205 kubelet[1901]: I0906 00:18:35.847182 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hubble-tls\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847368 kubelet[1901]: I0906 00:18:35.847347 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-config-path\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847725 kubelet[1901]: I0906 00:18:35.847510 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-xtables-lock\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847725 kubelet[1901]: I0906 00:18:35.847546 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-etc-cni-netd\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847725 kubelet[1901]: I0906 00:18:35.847575 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cni-path\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847725 kubelet[1901]: I0906 00:18:35.847598 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-run\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.847993 kubelet[1901]: I0906 00:18:35.847629 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-clustermesh-secrets\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848142 kubelet[1901]: I0906 00:18:35.848115 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-kernel\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848310 kubelet[1901]: I0906 00:18:35.848280 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-lib-modules\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848473 kubelet[1901]: I0906 00:18:35.848445 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqhm\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-kube-api-access-ksqhm\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848621 kubelet[1901]: I0906 00:18:35.848594 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-bpf-maps\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848756 kubelet[1901]: I0906 00:18:35.848734 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-net\") pod \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\" (UID: \"b12cbd82-a2f4-49a2-90f6-a2132dc55fbc\") "
Sep 6 00:18:35.848948 kubelet[1901]: I0906 00:18:35.848922 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.849065 kubelet[1901]: I0906 00:18:35.849050 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.850125 kubelet[1901]: I0906 00:18:35.847790 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hostproc" (OuterVolumeSpecName: "hostproc") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.850223 kubelet[1901]: I0906 00:18:35.850171 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cni-path" (OuterVolumeSpecName: "cni-path") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.850318 kubelet[1901]: I0906 00:18:35.850221 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.851462 kubelet[1901]: I0906 00:18:35.851421 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.856460 kubelet[1901]: I0906 00:18:35.851589 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.860460 kubelet[1901]: I0906 00:18:35.853640 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.865450 kubelet[1901]: I0906 00:18:35.853659 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.865720 kubelet[1901]: I0906 00:18:35.853964 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:35.865870 kubelet[1901]: I0906 00:18:35.856374 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7d4fb1f-cf2e-482a-9415-1a469c1c52ab" (UID: "c7d4fb1f-cf2e-482a-9415-1a469c1c52ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:18:35.866024 kubelet[1901]: I0906 00:18:35.860473 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:18:35.866109 kubelet[1901]: I0906 00:18:35.863342 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-kube-api-access-ksqhm" (OuterVolumeSpecName: "kube-api-access-ksqhm") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "kube-api-access-ksqhm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:18:35.866109 kubelet[1901]: I0906 00:18:35.865405 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:18:35.866109 kubelet[1901]: I0906 00:18:35.865563 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-kube-api-access-wgxmc" (OuterVolumeSpecName: "kube-api-access-wgxmc") pod "c7d4fb1f-cf2e-482a-9415-1a469c1c52ab" (UID: "c7d4fb1f-cf2e-482a-9415-1a469c1c52ab"). InnerVolumeSpecName "kube-api-access-wgxmc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:18:35.866994 kubelet[1901]: I0906 00:18:35.866956 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" (UID: "b12cbd82-a2f4-49a2-90f6-a2132dc55fbc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:18:35.949593 kubelet[1901]: I0906 00:18:35.949523 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-cilium-config-path\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949593 kubelet[1901]: I0906 00:18:35.949584 1901 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wgxmc\" (UniqueName: \"kubernetes.io/projected/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab-kube-api-access-wgxmc\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949593 kubelet[1901]: I0906 00:18:35.949610 1901 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hubble-tls\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949627 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-config-path\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949643 1901 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-xtables-lock\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949658 1901 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-etc-cni-netd\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949673 1901 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cni-path\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949689 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-run\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949703 1901 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-clustermesh-secrets\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949719 1901 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.949958 kubelet[1901]: I0906 00:18:35.949733 1901 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-lib-modules\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.950442 kubelet[1901]: I0906 00:18:35.949747 1901 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksqhm\" (UniqueName: \"kubernetes.io/projected/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-kube-api-access-ksqhm\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.950442 kubelet[1901]: I0906 00:18:35.949761 1901 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-bpf-maps\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.950442 kubelet[1901]: I0906 00:18:35.949777 1901 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-host-proc-sys-net\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.950442 kubelet[1901]: I0906 00:18:35.949848 1901 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-hostproc\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:35.950442 kubelet[1901]: I0906 00:18:35.949880 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc-cilium-cgroup\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:36.136981 systemd[1]: Removed slice kubepods-burstable-podb12cbd82_a2f4_49a2_90f6_a2132dc55fbc.slice.
Sep 6 00:18:36.137115 systemd[1]: kubepods-burstable-podb12cbd82_a2f4_49a2_90f6_a2132dc55fbc.slice: Consumed 10.822s CPU time.
Sep 6 00:18:36.139735 systemd[1]: Removed slice kubepods-besteffort-podc7d4fb1f_cf2e_482a_9415_1a469c1c52ab.slice.
Sep 6 00:18:36.215704 kubelet[1901]: E0906 00:18:36.212814 1901 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:18:36.419315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab-rootfs.mount: Deactivated successfully.
Sep 6 00:18:36.419483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab-shm.mount: Deactivated successfully.
Sep 6 00:18:36.419572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b-rootfs.mount: Deactivated successfully.
Sep 6 00:18:36.419660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b-shm.mount: Deactivated successfully. Sep 6 00:18:36.419755 systemd[1]: var-lib-kubelet-pods-c7d4fb1f\x2dcf2e\x2d482a\x2d9415\x2d1a469c1c52ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwgxmc.mount: Deactivated successfully. Sep 6 00:18:36.419855 systemd[1]: var-lib-kubelet-pods-b12cbd82\x2da2f4\x2d49a2\x2d90f6\x2da2132dc55fbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksqhm.mount: Deactivated successfully. Sep 6 00:18:36.419960 systemd[1]: var-lib-kubelet-pods-b12cbd82\x2da2f4\x2d49a2\x2d90f6\x2da2132dc55fbc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:18:36.420056 systemd[1]: var-lib-kubelet-pods-b12cbd82\x2da2f4\x2d49a2\x2d90f6\x2da2132dc55fbc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:18:36.574281 kubelet[1901]: I0906 00:18:36.574215 1901 scope.go:117] "RemoveContainer" containerID="7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492" Sep 6 00:18:36.586437 env[1193]: time="2025-09-06T00:18:36.584783702Z" level=info msg="RemoveContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\"" Sep 6 00:18:36.596342 env[1193]: time="2025-09-06T00:18:36.596183909Z" level=info msg="RemoveContainer for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" returns successfully" Sep 6 00:18:36.597113 kubelet[1901]: I0906 00:18:36.597071 1901 scope.go:117] "RemoveContainer" containerID="d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9" Sep 6 00:18:36.599761 env[1193]: time="2025-09-06T00:18:36.599697607Z" level=info msg="RemoveContainer for \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\"" Sep 6 00:18:36.606424 env[1193]: time="2025-09-06T00:18:36.606355180Z" level=info msg="RemoveContainer for 
\"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\" returns successfully" Sep 6 00:18:36.609048 kubelet[1901]: I0906 00:18:36.609006 1901 scope.go:117] "RemoveContainer" containerID="84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494" Sep 6 00:18:36.613686 env[1193]: time="2025-09-06T00:18:36.613634433Z" level=info msg="RemoveContainer for \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\"" Sep 6 00:18:36.619360 env[1193]: time="2025-09-06T00:18:36.619290726Z" level=info msg="RemoveContainer for \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\" returns successfully" Sep 6 00:18:36.620222 kubelet[1901]: I0906 00:18:36.620110 1901 scope.go:117] "RemoveContainer" containerID="9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68" Sep 6 00:18:36.622735 env[1193]: time="2025-09-06T00:18:36.622664302Z" level=info msg="RemoveContainer for \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\"" Sep 6 00:18:36.635423 env[1193]: time="2025-09-06T00:18:36.635317257Z" level=info msg="RemoveContainer for \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\" returns successfully" Sep 6 00:18:36.636139 kubelet[1901]: I0906 00:18:36.636003 1901 scope.go:117] "RemoveContainer" containerID="5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2" Sep 6 00:18:36.638965 env[1193]: time="2025-09-06T00:18:36.638900227Z" level=info msg="RemoveContainer for \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\"" Sep 6 00:18:36.646441 env[1193]: time="2025-09-06T00:18:36.646374265Z" level=info msg="RemoveContainer for \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\" returns successfully" Sep 6 00:18:36.647012 kubelet[1901]: I0906 00:18:36.646894 1901 scope.go:117] "RemoveContainer" containerID="7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492" Sep 6 00:18:36.649463 env[1193]: time="2025-09-06T00:18:36.649282318Z" level=error 
msg="ContainerStatus for \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\": not found" Sep 6 00:18:36.650416 kubelet[1901]: E0906 00:18:36.649824 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\": not found" containerID="7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492" Sep 6 00:18:36.650416 kubelet[1901]: I0906 00:18:36.649915 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492"} err="failed to get container status \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a195235f0a5adf80642c36b3167d32f35c6731affad8230dfbc5095ce42a492\": not found" Sep 6 00:18:36.650416 kubelet[1901]: I0906 00:18:36.650109 1901 scope.go:117] "RemoveContainer" containerID="d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9" Sep 6 00:18:36.651291 env[1193]: time="2025-09-06T00:18:36.651145512Z" level=error msg="ContainerStatus for \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\": not found" Sep 6 00:18:36.651887 kubelet[1901]: E0906 00:18:36.651661 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\": not found" 
containerID="d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9" Sep 6 00:18:36.651887 kubelet[1901]: I0906 00:18:36.651718 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9"} err="failed to get container status \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d162208f1256471f39eabf4e77a1f5883c9f9cba479e4ea98096cc36e0a85eb9\": not found" Sep 6 00:18:36.651887 kubelet[1901]: I0906 00:18:36.651743 1901 scope.go:117] "RemoveContainer" containerID="84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494" Sep 6 00:18:36.652707 env[1193]: time="2025-09-06T00:18:36.652602215Z" level=error msg="ContainerStatus for \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\": not found" Sep 6 00:18:36.653266 kubelet[1901]: E0906 00:18:36.653049 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\": not found" containerID="84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494" Sep 6 00:18:36.653266 kubelet[1901]: I0906 00:18:36.653096 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494"} err="failed to get container status \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\": rpc error: code = NotFound desc = an error occurred when try to find container \"84968e580a70dad9a0d5ceb901067785217bc50decae328cceaa440c74c68494\": not found" Sep 6 00:18:36.653266 
kubelet[1901]: I0906 00:18:36.653121 1901 scope.go:117] "RemoveContainer" containerID="9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68" Sep 6 00:18:36.653573 env[1193]: time="2025-09-06T00:18:36.653451631Z" level=error msg="ContainerStatus for \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\": not found" Sep 6 00:18:36.654214 kubelet[1901]: E0906 00:18:36.653956 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\": not found" containerID="9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68" Sep 6 00:18:36.654214 kubelet[1901]: I0906 00:18:36.653993 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68"} err="failed to get container status \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cd1eaf8ff4f579e20e0ed43984b37db070c6543e57bab38279d9e2c227dbd68\": not found" Sep 6 00:18:36.654214 kubelet[1901]: I0906 00:18:36.654066 1901 scope.go:117] "RemoveContainer" containerID="5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2" Sep 6 00:18:36.655228 env[1193]: time="2025-09-06T00:18:36.655118485Z" level=error msg="ContainerStatus for \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\": not found" Sep 6 00:18:36.655923 kubelet[1901]: E0906 00:18:36.655700 1901 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\": not found" containerID="5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2" Sep 6 00:18:36.655923 kubelet[1901]: I0906 00:18:36.655756 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2"} err="failed to get container status \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b96408dac097735e2932040d2cd100d3fbe4e98cb19e66a530a09f6859434d2\": not found" Sep 6 00:18:36.655923 kubelet[1901]: I0906 00:18:36.655780 1901 scope.go:117] "RemoveContainer" containerID="2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca" Sep 6 00:18:36.658229 env[1193]: time="2025-09-06T00:18:36.658159856Z" level=info msg="RemoveContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\"" Sep 6 00:18:36.666010 env[1193]: time="2025-09-06T00:18:36.665938880Z" level=info msg="RemoveContainer for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" returns successfully" Sep 6 00:18:36.666927 kubelet[1901]: I0906 00:18:36.666744 1901 scope.go:117] "RemoveContainer" containerID="2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca" Sep 6 00:18:36.667338 env[1193]: time="2025-09-06T00:18:36.667191760Z" level=error msg="ContainerStatus for \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\": not found" Sep 6 00:18:36.667715 kubelet[1901]: E0906 00:18:36.667613 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\": not found" containerID="2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca" Sep 6 00:18:36.667715 kubelet[1901]: I0906 00:18:36.667678 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca"} err="failed to get container status \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b2fb6b93664c341a18bb08d546b68dd5450e7303228d5ce87322b7af7c37cca\": not found" Sep 6 00:18:37.298795 sshd[3521]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:37.305552 systemd[1]: sshd@25-146.190.126.13:22-147.75.109.163:58368.service: Deactivated successfully. Sep 6 00:18:37.308013 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:18:37.308980 systemd[1]: session-25.scope: Consumed 1.515s CPU time. Sep 6 00:18:37.310812 systemd-logind[1186]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:18:37.315529 systemd[1]: Started sshd@26-146.190.126.13:22-147.75.109.163:58374.service. Sep 6 00:18:37.318413 systemd-logind[1186]: Removed session 25. Sep 6 00:18:37.371979 sshd[3688]: Accepted publickey for core from 147.75.109.163 port 58374 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:37.374767 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:37.383597 systemd[1]: Started session-26.scope. Sep 6 00:18:37.384407 systemd-logind[1186]: New session 26 of user core. Sep 6 00:18:37.966700 sshd[3688]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:37.977536 systemd[1]: Started sshd@27-146.190.126.13:22-147.75.109.163:58378.service. 
Sep 6 00:18:37.978352 systemd[1]: sshd@26-146.190.126.13:22-147.75.109.163:58374.service: Deactivated successfully. Sep 6 00:18:37.979303 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:18:37.982761 systemd-logind[1186]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:18:37.984605 systemd-logind[1186]: Removed session 26. Sep 6 00:18:38.020127 kubelet[1901]: I0906 00:18:38.020070 1901 memory_manager.go:355] "RemoveStaleState removing state" podUID="c7d4fb1f-cf2e-482a-9415-1a469c1c52ab" containerName="cilium-operator" Sep 6 00:18:38.020127 kubelet[1901]: I0906 00:18:38.020114 1901 memory_manager.go:355] "RemoveStaleState removing state" podUID="b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" containerName="cilium-agent" Sep 6 00:18:38.026838 sshd[3697]: Accepted publickey for core from 147.75.109.163 port 58378 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:38.029467 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:38.040656 systemd-logind[1186]: New session 27 of user core. Sep 6 00:18:38.043810 systemd[1]: Started session-27.scope. Sep 6 00:18:38.054952 systemd[1]: Created slice kubepods-burstable-podc46c5332_f568_4f13_a09e_155312a264be.slice. 
Sep 6 00:18:38.135163 kubelet[1901]: I0906 00:18:38.135046 1901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12cbd82-a2f4-49a2-90f6-a2132dc55fbc" path="/var/lib/kubelet/pods/b12cbd82-a2f4-49a2-90f6-a2132dc55fbc/volumes" Sep 6 00:18:38.136976 kubelet[1901]: I0906 00:18:38.136934 1901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d4fb1f-cf2e-482a-9415-1a469c1c52ab" path="/var/lib/kubelet/pods/c7d4fb1f-cf2e-482a-9415-1a469c1c52ab/volumes" Sep 6 00:18:38.169259 kubelet[1901]: I0906 00:18:38.169183 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-bpf-maps\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.169632 kubelet[1901]: I0906 00:18:38.169595 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-hubble-tls\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.169811 kubelet[1901]: I0906 00:18:38.169789 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-etc-cni-netd\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.169924 kubelet[1901]: I0906 00:18:38.169907 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-cilium-ipsec-secrets\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170035 
kubelet[1901]: I0906 00:18:38.170018 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cni-path\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170122 kubelet[1901]: I0906 00:18:38.170105 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-hostproc\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170264 kubelet[1901]: I0906 00:18:38.170224 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-cgroup\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170486 kubelet[1901]: I0906 00:18:38.170469 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-lib-modules\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170592 kubelet[1901]: I0906 00:18:38.170572 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-clustermesh-secrets\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170725 kubelet[1901]: I0906 00:18:38.170709 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-net\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170825 kubelet[1901]: I0906 00:18:38.170809 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4tw\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-kube-api-access-7l4tw\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.170936 kubelet[1901]: I0906 00:18:38.170922 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-run\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.171037 kubelet[1901]: I0906 00:18:38.171023 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-xtables-lock\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.171124 kubelet[1901]: I0906 00:18:38.171111 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c46c5332-f568-4f13-a09e-155312a264be-cilium-config-path\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.171318 kubelet[1901]: I0906 00:18:38.171297 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-kernel\") pod \"cilium-9k66w\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") " pod="kube-system/cilium-9k66w" Sep 6 00:18:38.322542 sshd[3697]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:38.334140 systemd[1]: Started sshd@28-146.190.126.13:22-147.75.109.163:58384.service. Sep 6 00:18:38.343783 systemd[1]: sshd@27-146.190.126.13:22-147.75.109.163:58378.service: Deactivated successfully. Sep 6 00:18:38.344736 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:18:38.347721 systemd-logind[1186]: Session 27 logged out. Waiting for processes to exit. Sep 6 00:18:38.354706 systemd-logind[1186]: Removed session 27. Sep 6 00:18:38.360728 kubelet[1901]: E0906 00:18:38.360367 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:18:38.366844 env[1193]: time="2025-09-06T00:18:38.365826032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k66w,Uid:c46c5332-f568-4f13-a09e-155312a264be,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:38.394450 sshd[3712]: Accepted publickey for core from 147.75.109.163 port 58384 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:38.401556 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:38.411227 systemd-logind[1186]: New session 28 of user core. Sep 6 00:18:38.412882 systemd[1]: Started session-28.scope. Sep 6 00:18:38.415139 env[1193]: time="2025-09-06T00:18:38.415026042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:38.415527 env[1193]: time="2025-09-06T00:18:38.415461106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:38.415674 env[1193]: time="2025-09-06T00:18:38.415640957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:38.416120 env[1193]: time="2025-09-06T00:18:38.416066940Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd pid=3722 runtime=io.containerd.runc.v2 Sep 6 00:18:38.460103 systemd[1]: Started cri-containerd-1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd.scope. Sep 6 00:18:38.534781 env[1193]: time="2025-09-06T00:18:38.534711005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k66w,Uid:c46c5332-f568-4f13-a09e-155312a264be,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\"" Sep 6 00:18:38.537581 kubelet[1901]: E0906 00:18:38.536458 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:18:38.543325 env[1193]: time="2025-09-06T00:18:38.543162727Z" level=info msg="CreateContainer within sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:18:38.578911 env[1193]: time="2025-09-06T00:18:38.573931504Z" level=info msg="CreateContainer within sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\"" Sep 6 00:18:38.580696 env[1193]: time="2025-09-06T00:18:38.580640276Z" level=info msg="StartContainer for \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\"" Sep 6 
00:18:38.629130 systemd[1]: Started cri-containerd-6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831.scope. Sep 6 00:18:38.650610 systemd[1]: cri-containerd-6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831.scope: Deactivated successfully. Sep 6 00:18:38.673611 env[1193]: time="2025-09-06T00:18:38.673508397Z" level=info msg="shim disconnected" id=6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831 Sep 6 00:18:38.673611 env[1193]: time="2025-09-06T00:18:38.673593888Z" level=warning msg="cleaning up after shim disconnected" id=6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831 namespace=k8s.io Sep 6 00:18:38.673611 env[1193]: time="2025-09-06T00:18:38.673609547Z" level=info msg="cleaning up dead shim" Sep 6 00:18:38.684977 env[1193]: time="2025-09-06T00:18:38.684890710Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3789 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:18:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:18:38.685492 env[1193]: time="2025-09-06T00:18:38.685332387Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Sep 6 00:18:38.686228 env[1193]: time="2025-09-06T00:18:38.686074073Z" level=error msg="Failed to pipe stdout of container \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\"" error="reading from a closed fifo" Sep 6 00:18:38.686640 env[1193]: time="2025-09-06T00:18:38.686524968Z" level=error msg="Failed to pipe stderr of container \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\"" error="reading from a closed fifo" Sep 6 00:18:38.690124 env[1193]: time="2025-09-06T00:18:38.690012598Z" level=error msg="StartContainer for 
\"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:18:38.690939 kubelet[1901]: E0906 00:18:38.690706 1901 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831" Sep 6 00:18:38.694792 kubelet[1901]: E0906 00:18:38.694657 1901 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 00:18:38.694792 kubelet[1901]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:18:38.694792 kubelet[1901]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:18:38.694792 kubelet[1901]: rm /hostbin/cilium-mount Sep 6 00:18:38.695334 kubelet[1901]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l4tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9k66w_kube-system(c46c5332-f568-4f13-a09e-155312a264be): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:18:38.695334 kubelet[1901]: > logger="UnhandledError" Sep 6 00:18:38.696832 kubelet[1901]: E0906 00:18:38.696735 1901 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9k66w" podUID="c46c5332-f568-4f13-a09e-155312a264be" Sep 6 00:18:38.768772 kubelet[1901]: I0906 00:18:38.768567 1901 setters.go:602] "Node became not ready" node="ci-3510.3.8-n-27671cbf1d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:18:38Z","lastTransitionTime":"2025-09-06T00:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:18:39.126629 kubelet[1901]: E0906 00:18:39.126535 1901 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8sb7p" podUID="85997cc3-3a99-4c56-9013-fe6c3001c54c" Sep 6 00:18:39.622440 env[1193]: time="2025-09-06T00:18:39.620058647Z" level=info msg="StopPodSandbox for \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\"" Sep 6 00:18:39.622440 env[1193]: time="2025-09-06T00:18:39.620125707Z" level=info msg="Container to stop \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:18:39.622446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd-shm.mount: Deactivated successfully. 
Sep 6 00:18:39.630867 systemd[1]: cri-containerd-1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd.scope: Deactivated successfully.
Sep 6 00:18:39.663050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd-rootfs.mount: Deactivated successfully.
Sep 6 00:18:39.671935 env[1193]: time="2025-09-06T00:18:39.671840887Z" level=info msg="shim disconnected" id=1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd
Sep 6 00:18:39.672573 env[1193]: time="2025-09-06T00:18:39.672516543Z" level=warning msg="cleaning up after shim disconnected" id=1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd namespace=k8s.io
Sep 6 00:18:39.672843 env[1193]: time="2025-09-06T00:18:39.672811828Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:39.685698 env[1193]: time="2025-09-06T00:18:39.685634049Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3821 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:39.686394 env[1193]: time="2025-09-06T00:18:39.686342396Z" level=info msg="TearDown network for sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" successfully"
Sep 6 00:18:39.686394 env[1193]: time="2025-09-06T00:18:39.686380632Z" level=info msg="StopPodSandbox for \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" returns successfully"
Sep 6 00:18:39.787119 kubelet[1901]: I0906 00:18:39.787031 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cni-path" (OuterVolumeSpecName: "cni-path") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.787119 kubelet[1901]: I0906 00:18:39.787122 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cni-path\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.787435 kubelet[1901]: I0906 00:18:39.787195 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-cilium-ipsec-secrets\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.787435 kubelet[1901]: I0906 00:18:39.787226 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-run\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787763 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-kernel\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787798 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l4tw\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-kube-api-access-7l4tw\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787813 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-xtables-lock\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787857 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-cgroup\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787876 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-etc-cni-netd\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787892 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-lib-modules\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787931 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-clustermesh-secrets\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787949 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-hubble-tls\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787963 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-hostproc\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.787995 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-bpf-maps\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788013 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c46c5332-f568-4f13-a09e-155312a264be-cilium-config-path\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788037 1901 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-net\") pod \"c46c5332-f568-4f13-a09e-155312a264be\" (UID: \"c46c5332-f568-4f13-a09e-155312a264be\") "
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788121 1901 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cni-path\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788174 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788203 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.788276 kubelet[1901]: I0906 00:18:39.788236 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.789726 kubelet[1901]: I0906 00:18:39.789682 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.790015 kubelet[1901]: I0906 00:18:39.789917 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.790015 kubelet[1901]: I0906 00:18:39.789939 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.790015 kubelet[1901]: I0906 00:18:39.789963 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.799938 kubelet[1901]: I0906 00:18:39.793641 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-kube-api-access-7l4tw" (OuterVolumeSpecName: "kube-api-access-7l4tw") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "kube-api-access-7l4tw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:18:39.799938 kubelet[1901]: I0906 00:18:39.793724 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.799938 kubelet[1901]: I0906 00:18:39.793747 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-hostproc" (OuterVolumeSpecName: "hostproc") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:18:39.799938 kubelet[1901]: I0906 00:18:39.796091 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46c5332-f568-4f13-a09e-155312a264be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:18:39.798787 systemd[1]: var-lib-kubelet-pods-c46c5332\x2df568\x2d4f13\x2da09e\x2d155312a264be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7l4tw.mount: Deactivated successfully.
Sep 6 00:18:39.802673 kubelet[1901]: I0906 00:18:39.801451 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:18:39.804119 kubelet[1901]: I0906 00:18:39.804074 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:18:39.805743 systemd[1]: var-lib-kubelet-pods-c46c5332\x2df568\x2d4f13\x2da09e\x2d155312a264be-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:18:39.811112 kubelet[1901]: I0906 00:18:39.811033 1901 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c46c5332-f568-4f13-a09e-155312a264be" (UID: "c46c5332-f568-4f13-a09e-155312a264be"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:18:39.890890 kubelet[1901]: I0906 00:18:39.888938 1901 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-net\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891212 kubelet[1901]: I0906 00:18:39.891172 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891500 kubelet[1901]: I0906 00:18:39.891477 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-run\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891600 kubelet[1901]: I0906 00:18:39.891583 1901 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891689 kubelet[1901]: I0906 00:18:39.891670 1901 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7l4tw\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-kube-api-access-7l4tw\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891787 kubelet[1901]: I0906 00:18:39.891763 1901 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-xtables-lock\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891896 kubelet[1901]: I0906 00:18:39.891876 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-cilium-cgroup\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.891980 kubelet[1901]: I0906 00:18:39.891966 1901 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-etc-cni-netd\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892051 kubelet[1901]: I0906 00:18:39.892038 1901 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-lib-modules\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892118 kubelet[1901]: I0906 00:18:39.892105 1901 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c46c5332-f568-4f13-a09e-155312a264be-clustermesh-secrets\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892213 kubelet[1901]: I0906 00:18:39.892197 1901 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c46c5332-f568-4f13-a09e-155312a264be-hubble-tls\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892326 kubelet[1901]: I0906 00:18:39.892307 1901 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-hostproc\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892419 kubelet[1901]: I0906 00:18:39.892406 1901 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c46c5332-f568-4f13-a09e-155312a264be-bpf-maps\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:39.892498 kubelet[1901]: I0906 00:18:39.892481 1901 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c46c5332-f568-4f13-a09e-155312a264be-cilium-config-path\") on node \"ci-3510.3.8-n-27671cbf1d\" DevicePath \"\""
Sep 6 00:18:40.134690 systemd[1]: Removed slice kubepods-burstable-podc46c5332_f568_4f13_a09e_155312a264be.slice.
Sep 6 00:18:40.280838 systemd[1]: var-lib-kubelet-pods-c46c5332\x2df568\x2d4f13\x2da09e\x2d155312a264be-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:18:40.280976 systemd[1]: var-lib-kubelet-pods-c46c5332\x2df568\x2d4f13\x2da09e\x2d155312a264be-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 6 00:18:40.624892 kubelet[1901]: I0906 00:18:40.624468 1901 scope.go:117] "RemoveContainer" containerID="6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831"
Sep 6 00:18:40.628984 env[1193]: time="2025-09-06T00:18:40.628624771Z" level=info msg="RemoveContainer for \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\""
Sep 6 00:18:40.643147 env[1193]: time="2025-09-06T00:18:40.643089085Z" level=info msg="RemoveContainer for \"6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831\" returns successfully"
Sep 6 00:18:40.716820 kubelet[1901]: I0906 00:18:40.716773 1901 memory_manager.go:355] "RemoveStaleState removing state" podUID="c46c5332-f568-4f13-a09e-155312a264be" containerName="mount-cgroup"
Sep 6 00:18:40.724477 systemd[1]: Created slice kubepods-burstable-pod726af668_97de_4524_a7cb_8fc4173c5252.slice.
Sep 6 00:18:40.797809 kubelet[1901]: I0906 00:18:40.797742 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-etc-cni-netd\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798123 kubelet[1901]: I0906 00:18:40.798086 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-host-proc-sys-net\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798614 kubelet[1901]: I0906 00:18:40.798549 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/726af668-97de-4524-a7cb-8fc4173c5252-hubble-tls\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798614 kubelet[1901]: I0906 00:18:40.798612 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-cilium-cgroup\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798637 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-cni-path\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798667 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-cilium-run\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798693 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-lib-modules\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798721 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/726af668-97de-4524-a7cb-8fc4173c5252-cilium-config-path\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798750 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/726af668-97de-4524-a7cb-8fc4173c5252-clustermesh-secrets\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798775 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-xtables-lock\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.798817 kubelet[1901]: I0906 00:18:40.798797 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/726af668-97de-4524-a7cb-8fc4173c5252-cilium-ipsec-secrets\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.799160 kubelet[1901]: I0906 00:18:40.798817 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-bpf-maps\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.799160 kubelet[1901]: I0906 00:18:40.798847 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-hostproc\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.799160 kubelet[1901]: I0906 00:18:40.798870 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/726af668-97de-4524-a7cb-8fc4173c5252-host-proc-sys-kernel\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:40.799160 kubelet[1901]: I0906 00:18:40.798923 1901 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkx75\" (UniqueName: \"kubernetes.io/projected/726af668-97de-4524-a7cb-8fc4173c5252-kube-api-access-hkx75\") pod \"cilium-z5z5j\" (UID: \"726af668-97de-4524-a7cb-8fc4173c5252\") " pod="kube-system/cilium-z5z5j"
Sep 6 00:18:41.029064 kubelet[1901]: E0906 00:18:41.028995 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:41.030054 env[1193]: time="2025-09-06T00:18:41.029610165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5z5j,Uid:726af668-97de-4524-a7cb-8fc4173c5252,Namespace:kube-system,Attempt:0,}"
Sep 6 00:18:41.047810 env[1193]: time="2025-09-06T00:18:41.047686273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:18:41.048197 env[1193]: time="2025-09-06T00:18:41.048101952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:18:41.048435 env[1193]: time="2025-09-06T00:18:41.048374836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:18:41.049657 env[1193]: time="2025-09-06T00:18:41.049542041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf pid=3849 runtime=io.containerd.runc.v2
Sep 6 00:18:41.075503 systemd[1]: Started cri-containerd-4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf.scope.
Sep 6 00:18:41.107329 env[1193]: time="2025-09-06T00:18:41.107234232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5z5j,Uid:726af668-97de-4524-a7cb-8fc4173c5252,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\""
Sep 6 00:18:41.110236 kubelet[1901]: E0906 00:18:41.108619 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:41.113322 env[1193]: time="2025-09-06T00:18:41.113220801Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:18:41.127082 kubelet[1901]: E0906 00:18:41.126514 1901 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8sb7p" podUID="85997cc3-3a99-4c56-9013-fe6c3001c54c"
Sep 6 00:18:41.129139 env[1193]: time="2025-09-06T00:18:41.129022542Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841\""
Sep 6 00:18:41.130982 env[1193]: time="2025-09-06T00:18:41.130882353Z" level=info msg="StartContainer for \"9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841\""
Sep 6 00:18:41.154962 systemd[1]: Started cri-containerd-9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841.scope.
Sep 6 00:18:41.195867 env[1193]: time="2025-09-06T00:18:41.195804584Z" level=info msg="StartContainer for \"9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841\" returns successfully"
Sep 6 00:18:41.211540 systemd[1]: cri-containerd-9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841.scope: Deactivated successfully.
Sep 6 00:18:41.217680 kubelet[1901]: E0906 00:18:41.217542 1901 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:18:41.253059 env[1193]: time="2025-09-06T00:18:41.252978874Z" level=info msg="shim disconnected" id=9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841
Sep 6 00:18:41.253059 env[1193]: time="2025-09-06T00:18:41.253050586Z" level=warning msg="cleaning up after shim disconnected" id=9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841 namespace=k8s.io
Sep 6 00:18:41.253059 env[1193]: time="2025-09-06T00:18:41.253065740Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:41.266054 env[1193]: time="2025-09-06T00:18:41.265992215Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3933 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:41.631622 kubelet[1901]: E0906 00:18:41.631579 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:41.648297 env[1193]: time="2025-09-06T00:18:41.645679849Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:18:41.670620 env[1193]: time="2025-09-06T00:18:41.670372881Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7\""
Sep 6 00:18:41.671210 env[1193]: time="2025-09-06T00:18:41.671176171Z" level=info msg="StartContainer for \"e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7\""
Sep 6 00:18:41.675742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859182192.mount: Deactivated successfully.
Sep 6 00:18:41.709961 systemd[1]: Started cri-containerd-e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7.scope.
Sep 6 00:18:41.750633 env[1193]: time="2025-09-06T00:18:41.750575314Z" level=info msg="StartContainer for \"e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7\" returns successfully"
Sep 6 00:18:41.761896 systemd[1]: cri-containerd-e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7.scope: Deactivated successfully.
Sep 6 00:18:41.787035 kubelet[1901]: W0906 00:18:41.786965 1901 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc46c5332_f568_4f13_a09e_155312a264be.slice/cri-containerd-6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831.scope WatchSource:0}: container "6c7e96b939581c5878b60f65abdc75b3dababd450841eff379218beabac39831" in namespace "k8s.io": not found
Sep 6 00:18:41.812997 env[1193]: time="2025-09-06T00:18:41.812922533Z" level=info msg="shim disconnected" id=e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7
Sep 6 00:18:41.813543 env[1193]: time="2025-09-06T00:18:41.813489036Z" level=warning msg="cleaning up after shim disconnected" id=e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7 namespace=k8s.io
Sep 6 00:18:41.813543 env[1193]: time="2025-09-06T00:18:41.813535147Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:41.825774 env[1193]: time="2025-09-06T00:18:41.825705828Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3998 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:42.128859 kubelet[1901]: I0906 00:18:42.128820 1901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46c5332-f568-4f13-a09e-155312a264be" path="/var/lib/kubelet/pods/c46c5332-f568-4f13-a09e-155312a264be/volumes"
Sep 6 00:18:42.281094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7-rootfs.mount: Deactivated successfully.
Sep 6 00:18:42.636032 kubelet[1901]: E0906 00:18:42.635989 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:42.640804 env[1193]: time="2025-09-06T00:18:42.640709791Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:18:42.655715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974939741.mount: Deactivated successfully.
Sep 6 00:18:42.665284 env[1193]: time="2025-09-06T00:18:42.665186317Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad\""
Sep 6 00:18:42.666399 env[1193]: time="2025-09-06T00:18:42.666347419Z" level=info msg="StartContainer for \"90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad\""
Sep 6 00:18:42.757580 systemd[1]: Started cri-containerd-90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad.scope.
Sep 6 00:18:42.821383 env[1193]: time="2025-09-06T00:18:42.821325818Z" level=info msg="StartContainer for \"90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad\" returns successfully"
Sep 6 00:18:42.833434 systemd[1]: cri-containerd-90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad.scope: Deactivated successfully.
Sep 6 00:18:42.878501 env[1193]: time="2025-09-06T00:18:42.878443562Z" level=info msg="shim disconnected" id=90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad
Sep 6 00:18:42.878847 env[1193]: time="2025-09-06T00:18:42.878817380Z" level=warning msg="cleaning up after shim disconnected" id=90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad namespace=k8s.io
Sep 6 00:18:42.878974 env[1193]: time="2025-09-06T00:18:42.878946924Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:42.902271 env[1193]: time="2025-09-06T00:18:42.902087650Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:43.126815 kubelet[1901]: E0906 00:18:43.126739 1901 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8sb7p" podUID="85997cc3-3a99-4c56-9013-fe6c3001c54c"
Sep 6 00:18:43.281564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad-rootfs.mount: Deactivated successfully.
Sep 6 00:18:43.640488 kubelet[1901]: E0906 00:18:43.640341 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:43.643216 env[1193]: time="2025-09-06T00:18:43.643172026Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:18:43.660833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835357265.mount: Deactivated successfully.
Sep 6 00:18:43.677203 env[1193]: time="2025-09-06T00:18:43.677142140Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d\""
Sep 6 00:18:43.679005 env[1193]: time="2025-09-06T00:18:43.678949260Z" level=info msg="StartContainer for \"11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d\""
Sep 6 00:18:43.721205 systemd[1]: Started cri-containerd-11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d.scope.
Sep 6 00:18:43.775122 systemd[1]: cri-containerd-11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d.scope: Deactivated successfully.
Sep 6 00:18:43.777694 env[1193]: time="2025-09-06T00:18:43.777626907Z" level=info msg="StartContainer for \"11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d\" returns successfully"
Sep 6 00:18:43.811371 env[1193]: time="2025-09-06T00:18:43.811271709Z" level=info msg="shim disconnected" id=11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d
Sep 6 00:18:43.811371 env[1193]: time="2025-09-06T00:18:43.811347353Z" level=warning msg="cleaning up after shim disconnected" id=11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d namespace=k8s.io
Sep 6 00:18:43.811371 env[1193]: time="2025-09-06T00:18:43.811364603Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:43.823613 env[1193]: time="2025-09-06T00:18:43.823545440Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:44.281516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d-rootfs.mount: Deactivated successfully.
Sep 6 00:18:44.654628 kubelet[1901]: E0906 00:18:44.654499 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:44.663124 env[1193]: time="2025-09-06T00:18:44.662974019Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:18:44.689542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204687373.mount: Deactivated successfully.
Sep 6 00:18:44.699475 env[1193]: time="2025-09-06T00:18:44.698481678Z" level=info msg="CreateContainer within sandbox \"4fd5f8acda28c1408ff73e046c5fe809d202aef89a8246ca076396aa97c98dcf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1\""
Sep 6 00:18:44.705824 env[1193]: time="2025-09-06T00:18:44.704169041Z" level=info msg="StartContainer for \"3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1\""
Sep 6 00:18:44.751477 systemd[1]: Started cri-containerd-3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1.scope.
Sep 6 00:18:44.820890 env[1193]: time="2025-09-06T00:18:44.820818532Z" level=info msg="StartContainer for \"3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1\" returns successfully"
Sep 6 00:18:44.900589 kubelet[1901]: W0906 00:18:44.900517 1901 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726af668_97de_4524_a7cb_8fc4173c5252.slice/cri-containerd-9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841.scope WatchSource:0}: task 9eee2bcf3030631f91a53fb8c5c2919b87a311bedb56e665ab8f6eca7cb22841 not found: not found
Sep 6 00:18:45.126027 kubelet[1901]: E0906 00:18:45.125955 1901 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-8sb7p" podUID="85997cc3-3a99-4c56-9013-fe6c3001c54c"
Sep 6 00:18:45.432300 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:18:45.663849 kubelet[1901]: E0906 00:18:45.663767 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:45.697773 kubelet[1901]: I0906 00:18:45.697680 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5z5j" podStartSLOduration=5.697657708 podStartE2EDuration="5.697657708s" podCreationTimestamp="2025-09-06 00:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:18:45.695420584 +0000 UTC m=+119.908954860" watchObservedRunningTime="2025-09-06 00:18:45.697657708 +0000 UTC m=+119.911191980"
Sep 6 00:18:46.083688 env[1193]: time="2025-09-06T00:18:46.083625419Z" level=info msg="StopPodSandbox for \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\""
Sep 6 00:18:46.084151 env[1193]: time="2025-09-06T00:18:46.083739446Z" level=info msg="TearDown network for sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" successfully"
Sep 6 00:18:46.084151 env[1193]: time="2025-09-06T00:18:46.083776714Z" level=info msg="StopPodSandbox for \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" returns successfully"
Sep 6 00:18:46.085280 env[1193]: time="2025-09-06T00:18:46.084451583Z" level=info msg="RemovePodSandbox for \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\""
Sep 6 00:18:46.085280 env[1193]: time="2025-09-06T00:18:46.084497691Z" level=info msg="Forcibly stopping sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\""
Sep 6 00:18:46.085280 env[1193]: time="2025-09-06T00:18:46.084588931Z" level=info msg="TearDown network for sandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" successfully"
Sep 6 00:18:46.092169 env[1193]: time="2025-09-06T00:18:46.092117751Z" level=info msg="RemovePodSandbox \"1d1a8b8d875126aa61c3d35dc94342b1471640ffddddcc338ed8914e99ecd7bd\" returns successfully"
Sep 6 00:18:46.093180 env[1193]: time="2025-09-06T00:18:46.093131588Z" level=info msg="StopPodSandbox for \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\""
Sep 6 00:18:46.093180 env[1193]: time="2025-09-06T00:18:46.093274171Z" level=info msg="TearDown network for sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" successfully"
Sep 6 00:18:46.093180 env[1193]: time="2025-09-06T00:18:46.093322735Z" level=info msg="StopPodSandbox for \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" returns successfully"
Sep 6 00:18:46.093907 env[1193]: time="2025-09-06T00:18:46.093866191Z" level=info msg="RemovePodSandbox for \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\""
Sep 6 00:18:46.093959 env[1193]: time="2025-09-06T00:18:46.093910652Z" level=info msg="Forcibly stopping sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\""
Sep 6 00:18:46.094134 env[1193]: time="2025-09-06T00:18:46.094017422Z" level=info msg="TearDown network for sandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" successfully"
Sep 6 00:18:46.097649 env[1193]: time="2025-09-06T00:18:46.097586078Z" level=info msg="RemovePodSandbox \"744cd2a87596ea55ebe7b769e5f554f079dbaa3dd1f8b9a2724c3de0b9e7424b\" returns successfully"
Sep 6 00:18:46.098506 env[1193]: time="2025-09-06T00:18:46.098313566Z" level=info msg="StopPodSandbox for \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\""
Sep 6 00:18:46.098506 env[1193]: time="2025-09-06T00:18:46.098410710Z" level=info msg="TearDown network for sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" successfully"
Sep 6 00:18:46.098506 env[1193]: time="2025-09-06T00:18:46.098443895Z" level=info msg="StopPodSandbox for \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" returns successfully"
Sep 6 00:18:46.099201 env[1193]: time="2025-09-06T00:18:46.099159336Z" level=info msg="RemovePodSandbox for \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\""
Sep 6 00:18:46.099310 env[1193]: time="2025-09-06T00:18:46.099204352Z" level=info msg="Forcibly stopping sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\""
Sep 6 00:18:46.099362 env[1193]: time="2025-09-06T00:18:46.099338143Z" level=info msg="TearDown network for sandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" successfully"
Sep 6 00:18:46.103022 env[1193]: time="2025-09-06T00:18:46.102926561Z" level=info msg="RemovePodSandbox \"f6f1db2de3f5ff674efe06e1c871277c825a6ed0d49ec56476c813527003b8ab\" returns successfully"
Sep 6 00:18:46.936556 systemd[1]: run-containerd-runc-k8s.io-3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1-runc.JHOHWg.mount: Deactivated successfully.
Sep 6 00:18:47.030972 kubelet[1901]: E0906 00:18:47.030909 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:47.127228 kubelet[1901]: E0906 00:18:47.127175 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:48.008135 kubelet[1901]: W0906 00:18:48.008067 1901 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726af668_97de_4524_a7cb_8fc4173c5252.slice/cri-containerd-e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7.scope WatchSource:0}: task e37f54a673bb9e2595ae492eaaedc48464966055e292ecad1617d3d7157569e7 not found: not found
Sep 6 00:18:49.081820 kubelet[1901]: E0906 00:18:49.081659 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:49.104149 systemd-networkd[1005]: lxc_health: Link UP
Sep 6 00:18:49.175747 systemd-networkd[1005]: lxc_health: Gained carrier
Sep 6 00:18:49.176303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:18:49.235739 systemd[1]: run-containerd-runc-k8s.io-3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1-runc.uLmf6V.mount: Deactivated successfully.
Sep 6 00:18:49.671792 kubelet[1901]: E0906 00:18:49.671693 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:50.674231 kubelet[1901]: E0906 00:18:50.674153 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:18:50.890609 systemd-networkd[1005]: lxc_health: Gained IPv6LL
Sep 6 00:18:51.136609 kubelet[1901]: W0906 00:18:51.136555 1901 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726af668_97de_4524_a7cb_8fc4173c5252.slice/cri-containerd-90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad.scope WatchSource:0}: task 90d54058a272350e393098bc60968469f0bf288599937d2b1ff779efa3bd16ad not found: not found
Sep 6 00:18:53.739013 systemd[1]: run-containerd-runc-k8s.io-3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1-runc.r5FcYK.mount: Deactivated successfully.
Sep 6 00:18:54.249643 kubelet[1901]: W0906 00:18:54.249572 1901 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726af668_97de_4524_a7cb_8fc4173c5252.slice/cri-containerd-11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d.scope WatchSource:0}: task 11594831c34e3b2b5be75b1071e14bb97b49fc2332ab9d021b2e7be2c80f728d not found: not found
Sep 6 00:18:55.973432 systemd[1]: run-containerd-runc-k8s.io-3f5e7f95c0c92f5896c14bce86a505dc1eb2fe115fc0e07f9acef43c048f5af1-runc.uzgcny.mount: Deactivated successfully.
Sep 6 00:18:56.071077 sshd[3712]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:56.076760 systemd[1]: sshd@28-146.190.126.13:22-147.75.109.163:58384.service: Deactivated successfully.
Sep 6 00:18:56.077914 systemd[1]: session-28.scope: Deactivated successfully.
Sep 6 00:18:56.081050 systemd-logind[1186]: Session 28 logged out. Waiting for processes to exit.
Sep 6 00:18:56.082911 systemd-logind[1186]: Removed session 28.